====== Using the Cluster ======
ILRI's high-performance computing "cluster" is currently composed of dedicated machines:
  * **hpc**: main login node, the "master" of the cluster
  * **taurus**, **compute2**, **compute04**: used for batch and interactive jobs like BLAST, structure, R, etc. (compute04 has lots of disk space under its ''/var/scratch'')
  * **mammoth**: used for high-memory jobs like genome assembly (mira, newbler, abyss, etc.)
  * **compute03**: fast CPUs, but few of them
  * **compute05**: batch jobs, has the fastest processors (AMD EPYC)
  
To get access to the cluster you should talk to Jean-Baka (he sits in BecA). Once you have access you should read up on [[Using SLURM|SLURM]] so you can learn how to submit jobs to the cluster.
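
To give a first idea of what job submission looks like, here is a minimal batch-script sketch. The partition name, module name and input files are placeholders, so check the [[Using SLURM|SLURM]] page for the values actually used on this cluster.

<code bash>
#!/bin/bash
#SBATCH --job-name=blast-test        # a name for the job
#SBATCH --partition=batch            # placeholder partition name; see the SLURM page
#SBATCH --cpus-per-task=4            # number of CPU cores requested
#SBATCH --output=blast-test.%j.out   # log file; %j expands to the job ID

# load the tool you need (module name is illustrative; "module avail" lists what exists)
module load blast

# the actual work: a BLAST search using the requested CPUs
blastn -query my_sequences.fasta -db nt -num_threads 4 -out results.tsv
</code>

You would then submit the script from the login node with ''sbatch myscript.sbatch'' and follow its progress with ''squeue''.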
  
===== How to Connect to the Cluster =====

Connecting to the HPC **is not done through clicking on the "Log In" link** at the top right corner of these wiki pages. In order to launch computations on the HPC, or even just to view files residing on its storage infrastructure, users must use the **SSH protocol**. Through this protocol, users gain command-line access to the HPC from an SSH //client// installed on their own machine (e.g. a laptop, desktop or smartphone). The procedure differs depending on the operating system of the computer from which you want to establish the connection:

==== If you are running MacOSX (on Apple computers) or any GNU/Linux distribution ====

Those operating systems are part of the large family of UNIX systems, which almost invariably ship with an SSH client already installed, most often some flavor of the [[https://www.openssh.com/|OpenSSH]] client. Just open a terminal emulator and run the command ''ssh username@hpc.ilri.cgiar.org'', where you replace ''username'' with your own username on the HPC (as communicated by the person who created your account there).
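
For instance, with a hypothetical username ''jdoe'', the connection looks like this; you will be prompted for your HPC password before landing in a shell on the login node:

<code bash>
# connect to the HPC login node (replace jdoe with your own HPC username)
ssh jdoe@hpc.ilri.cgiar.org
</code>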

If the above doesn't work, then you probably have to **install** an SSH client. It suffices to **install the SSH client only**; there is no need for the SSH server, which would only be useful if you wanted to allow remote connections //into// your computer. For instance, you can read these [[https://www.cyberciti.biz/faq/how-to-install-ssh-on-ubuntu-linux-using-apt-get/|instructions to install openssh-client on Ubuntu GNU/Linux]].
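
On Debian-based distributions such as Ubuntu, that installation typically boils down to the following commands (the package name may differ on other distributions):

<code bash>
# install only the OpenSSH client (no server) on Ubuntu/Debian
sudo apt-get update
sudo apt-get install openssh-client
</code>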

==== If you are running Microsoft Windows ====

If you are running Windows 10, you can access a simple SSH client by [[https://www.howtogeek.com/235101/10-ways-to-open-the-command-prompt-in-windows-10/|launching the "command prompt"]] and typing ''ssh username@hpc.ilri.cgiar.org'' there, where you replace ''username'' with your own username on the HPC (as communicated by the person who created your HPC account).
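
To check that the built-in OpenSSH client is actually present, you can first ask it for its version from the same command prompt:

<code>
rem check that the OpenSSH client is available, then connect
ssh -V
ssh username@hpc.ilri.cgiar.org
</code>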

Another option is to [[https://mobaxterm.mobatek.net/download-home-edition.html|install MobaXterm]]: choose the "installer edition", unless you don't have the rights to install software on the computer you are using, in which case you will use the "portable edition". Once you have installed MobaXterm, you can set up a new SSH session by specifying the following connection parameters:
  * host: ''hpc.ilri.cgiar.org''
  * port: leave the default SSH port, i.e. port 22
  * username: your username, as communicated by the person who created your HPC account.

===== Cluster Organization =====
The cluster is arranged in a master/slave configuration; users log into HPC (the master) and use it as a "jumping off point" to the rest of the cluster. Here's a diagram of the topology. For each server, we mention the number of CPUs and the year it was commissioned:
{{:hpc_topology_2019_web.png|}}
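
In practice that "jumping off" usually happens through SLURM rather than by logging into the compute nodes directly. For example, an interactive shell on a compute node can be requested roughly like this (the partition name and CPU count are placeholders):

<code bash>
# run from the hpc login node: request an interactive shell on a compute node
srun --partition=batch --cpus-per-task=2 --pty bash
# once the shell opens you are on a compute node; type "exit" to release it
</code>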

==== Detailed Information ====
  
^ Machine ^ Specifications ^ Uses ^ 1 hour status ^
| taurus | 116 GB RAM \\ 64 CPUs | batch and interactive jobs \\ Good for BLAST, structure, R, admixture, etc. | {{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=taurus&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
| mammoth | 516 GB RAM \\ CPUs | batch and high-memory jobs \\ Good for genome assembly (mira, newbler, abyss, etc.) | {{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=mammoth&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
| compute2 | 132 GB RAM \\ 64 CPUs | batch and interactive jobs \\ Good for BLAST, structure, R, etc. | {{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute2&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
| compute03 | 442 GB RAM \\ 8 CPUs | batch and high-memory jobs \\ Good for genome assembly (mira, newbler, abyss, etc.), mothur | {{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute03&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
| compute04 | 48 GB RAM \\ 8 CPUs \\ 10 TB scratch | batch jobs \\ Good for BLAST, structure, R, etc., that need lots of local disk space (''/var/scratch/'') | {{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute04&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
| compute05 | 384 GB RAM \\ 48 CPUs \\ 1.6 TB scratch | batch jobs \\ Most recent AMD EPYC CPUs, good for BLAST, structure, R, etc. | {{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute05&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
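
A live view of these nodes and their current state is also available from the login node through SLURM, for example:

<code bash>
# list every node with its partition, CPU count, memory and current state
sinfo --Node --long
</code>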

===== Backups =====
At the moment we do not back up users' data in their respective home folders. We therefore advise users to keep their own backups.
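
A simple way to do that, for example, is to pull a copy of your HPC home directory onto your own computer with ''rsync'' over SSH (the username and destination folder below are placeholders):

<code bash>
# copy your HPC home directory into a local folder, preserving permissions and timestamps
# (replace jdoe with your HPC username and ~/hpc-backup with your preferred local folder)
rsync -avz --progress jdoe@hpc.ilri.cgiar.org:~/ ~/hpc-backup/
</code>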