====== Using the Cluster ======
ILRI's high-performance computing "cluster" is currently composed of dedicated machines:
  * **hpc**: main login node, "master" of the cluster
  * **compute2**, **compute05**, **compute06**: used for batch and interactive jobs like BLAST, structure, R, etc (compute05 and compute06 have the newest AMD EPYC CPUs)
  * **compute07**: used for high-memory jobs like genome assembly (mira, newbler, abyss, etc)
  * **compute03**: fast CPUs, but few of them
  
To get access to the cluster you should talk to Jean-Baka (he sits in BecA). Once you have access you should read up on [[Using SLURM|SLURM]] so you can learn how to submit jobs to the cluster.
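
By way of preview, here is a minimal sketch of a SLURM batch script (the job name, output file and command are hypothetical; the [[Using SLURM|SLURM]] page documents the partitions and options actually available here):

<code bash>
#!/usr/bin/env bash
#SBATCH --job-name=hello     # a short name for the job
#SBATCH --ntasks=1           # run a single task
#SBATCH --output=hello.out   # file receiving the job's standard output

# the actual work: report which compute node ran the job
echo "Hello from $(hostname)"
</code>

Saved as ''hello.sbatch'', the script would be submitted from the login node with ''sbatch hello.sbatch''.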
===== How to Connect to the Cluster =====

Connecting to the HPC **is not done by clicking the "Log In" link** in the top right corner of these wiki pages. To launch computations on the HPC, or even just to view files residing on its storage infrastructure, users must use the **SSH protocol**. Through this protocol, users gain command-line access to the HPC from SSH //client// software installed on their own machine (e.g. a laptop, desktop or smartphone). The procedure differs depending on the operating system of the computer from which you want to establish the connection:
==== If you are running macOS (on Apple computers) or any GNU/Linux distribution ====

These operating systems are part of the large family of UNIX systems, which almost invariably ship with a pre-installed SSH client, most often some flavor of the [[https://www.openssh.com/|OpenSSH]] client. Just open a terminal emulator and run the command ''ssh username@hpc.ilri.cgiar.org'', where you replace ''username'' with your own username on the HPC (as communicated by the person who created your account there).
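
A minimal example, where ''alice'' stands in for your own HPC username:

<code bash>
# open an SSH session on the HPC login node
# (you will typically be prompted for your password)
ssh alice@hpc.ilri.cgiar.org
</code>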
If the above doesn't work, you probably have to **install** an SSH client. Installing the **SSH client only** is enough; the SSH //server// would only be useful if you wanted to allow remote connections //into// your own computer. For instance, you can read the [[https://www.cyberciti.biz/faq/how-to-install-ssh-on-ubuntu-linux-using-apt-get/|instructions to install openssh-client on Ubuntu GNU/Linux]].
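
On Ubuntu or another Debian-based distribution, for example, installation typically boils down to the following (package names may differ on other distributions):

<code bash>
# install the OpenSSH client only, not the server
sudo apt-get update
sudo apt-get install openssh-client
</code>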
==== If you are running Microsoft Windows ====

If you are running Windows 10, you can access a simple SSH client by [[https://www.howtogeek.com/235101/10-ways-to-open-the-command-prompt-in-windows-10/|launching the "command prompt"]] and then typing ''ssh username@hpc.ilri.cgiar.org'', where you replace ''username'' with your own username on the HPC (as communicated by the person who created your HPC account).

Another option is to [[https://mobaxterm.mobatek.net/download-home-edition.html|install MobaXterm]]: choose the "installer edition" unless you don't have rights to install software on the computer you are using, in which case you will use the "portable edition". Once you have installed MobaXterm, you can set up a new connection by specifying the following connection parameters (the same parameters also apply to the command-line clients above, as sketched after this list):
  * host: ''hpc.ilri.cgiar.org''
  * port: leave the default SSH port, i.e. port 22
  * username: your username, as communicated by the person who created your HPC account.
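
For those using a command-line OpenSSH client instead, these parameters can be saved once in ''~/.ssh/config'' so that typing ''ssh hpc'' suffices afterwards; a minimal sketch, where the alias ''hpc'' and the username ''alice'' are placeholders:

<code>
# ~/.ssh/config -- per-user OpenSSH client configuration
# define an alias so that "ssh hpc" is enough:
Host hpc
    # the HPC login node
    HostName hpc.ilri.cgiar.org
    # the default SSH port
    Port 22
    # your HPC username
    User alice
</code>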
  
===== Cluster Organization =====
The cluster is arranged in a master/slave configuration; users log into HPC (the master) and use it as a "jumping off point" to the rest of the cluster. Here's a diagram of the topology. For each server, we mention the number of CPUs and the year it was commissioned.
{{:hpc_topology_2019_web.png|}}
  
==== Detailed Information ====
  
^Machine           ^Specifications                ^Uses                 ^1 hour status   ^
|compute2| 132 GB RAM \\ 64 CPUs | batch and interactive jobs \\ Good for BLAST, structure, R, etc.|{{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute2&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
|compute03 | 442 GB RAM \\ 8 CPUs | batch and high-memory jobs \\ Good for genome assembly (mira, newbler, abyss, etc), mothur|{{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute03&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
|compute05 | 384 GB RAM \\ 48 CPUs \\ 1.6 TB scratch | batch jobs \\ Most recent AMD EPYC CPUs, good for BLAST, structure, R, etc |{{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute05&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
|compute06| 256 GB RAM \\ 64 CPUs | batch and interactive jobs \\ Good for BLAST, structure, R, admixture, etc.|{{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute06&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
|compute07| 1 TB RAM \\ 8 CPUs | batch and high-memory jobs \\ Good for genome assembly (mira, newbler, abyss, etc)|{{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute07&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
  
===== Backups =====
At the moment we don't back up users' data in their respective home folders. We therefore advise users to keep their own backups.
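
For instance, from a UNIX machine you could periodically pull a copy of your home folder with ''rsync''; a minimal sketch, where ''alice'' and the local destination path are placeholders:

<code bash>
# mirror the HPC home folder into a local backup directory:
# -a preserves permissions and timestamps, -v is verbose, -z compresses in transit
rsync -avz alice@hpc.ilri.cgiar.org:~/ /path/to/local/backup/
</code>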