====== Using the Cluster ======
ILRI's high-performance computing "cluster" is currently composed of dedicated machines:
  * **hpc**: main login node, "master" of the cluster
  * **taurus**, **compute2**, **compute04**: used for batch and interactive jobs like BLAST, structure, R, etc (compute04 has lots of disk space under its ''/var/scratch'')
  * **mammoth**: used for high-memory jobs like genome assembly (mira, newbler, abyss, etc)
  * **compute03**: fast CPUs, but few of them
  * **compute05**: batch jobs, has the fastest processors (AMD EPYC)
  
To get access to the cluster, talk to Jean-Baka (he sits in BecA). Once you have access, read up on [[Using SLURM|SLURM]] to learn how to submit jobs to the cluster; a minimal example is sketched below.
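For illustration only, here is what a very small batch job could look like. The job name, log file, CPU count and the command being run are all placeholders; the partitions, limits and recommended options for this cluster are documented on the [[Using SLURM|SLURM]] page.

<code bash>
#!/usr/bin/env bash
#SBATCH --job-name=example_job      # name shown by squeue (placeholder)
#SBATCH --output=example_job.log    # file receiving stdout/stderr (placeholder)
#SBATCH --cpus-per-task=4           # CPU cores reserved for this job

# Replace this with your actual command (BLAST, R, structure, ...)
echo "Running on $(hostname) with ${SLURM_CPUS_PER_TASK} CPU(s)"
</code>

Save it as e.g. ''example_job.sbatch'', submit it with ''sbatch example_job.sbatch'', and follow its progress in the queue with ''squeue''.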
  
  
===== How to Connect to the Cluster =====

In order to launch computations on the HPC, or even just to view files residing in its storage infrastructure, users must use the **SSH protocol**. Through this protocol, users get command-line access to the HPC from SSH //client// software installed on their own machine (e.g. a laptop, desktop or smartphone). The procedure differs depending on the operating system of the computer from which you want to establish the connection:

== If you are running MacOSX (on Apple computers) or any GNU/Linux distribution ==

These operating systems belong to the large family of UNIX systems, which almost invariably include a pre-installed SSH client, most often some flavor of the [[https://www.openssh.com/|OpenSSH]] client. Just open a terminal emulator and run the command ''ssh username@hpc.ilri.cgiar.org'', replacing ''username'' with your own username on the HPC (as communicated by the person who created your account there).
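For example, assuming the (hypothetical) username ''jdoe'', the connection and an optional ''~/.ssh/config'' shortcut would look like this:

<code bash>
# Connect to the HPC login node (replace jdoe with your own username)
ssh jdoe@hpc.ilri.cgiar.org

# Optional: store the connection details so that "ssh hpc" is enough next time
cat >> ~/.ssh/config <<'EOF'
Host hpc
    HostName hpc.ilri.cgiar.org
    User jdoe
EOF
</code>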
===== Cluster Organization =====
The cluster is arranged in a master/slave configuration; users log into HPC (the master) and use it as a "jumping off point" to the rest of the cluster. Here's a diagram of the topology. For each server, we mention the number of CPUs and the year it was commissioned:
{{:hpc_topology_2019_web.png|}}

==== Detailed Information ====
  
^ Machine   ^ Specifications                          ^ Uses                 ^ 1 hour status ^
| taurus    | 116 GB RAM \\ 64 CPUs                   | batch and interactive jobs \\ Good for BLAST, structure, R, admixture, etc. | {{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=taurus&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
| mammoth   | 516 GB RAM \\ CPUs                      | batch and high-memory jobs \\ Good for genome assembly (mira, newbler, abyss, etc.) | {{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=mammoth&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
| compute2  | 132 GB RAM \\ 64 CPUs                   | batch and interactive jobs \\ Good for BLAST, structure, R, etc. | {{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute2&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
| compute03 | 442 GB RAM \\ 8 CPUs                    | batch and high-memory jobs \\ Good for genome assembly (mira, newbler, abyss, etc.), mothur | {{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute03&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
| compute04 | 48 GB RAM \\ 8 CPUs \\ 10 TB scratch    | batch jobs \\ Good for jobs like BLAST, structure, R, etc. that need lots of local disk space (''/var/scratch/'') | {{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute04&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
| compute05 | 384 GB RAM \\ 48 CPUs \\ 1.6 TB scratch | batch jobs \\ Most recent AMD EPYC CPUs; good for BLAST, structure, R, etc. | {{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute05&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
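To see how SLURM views these machines at any moment, or to steer a job to one of them explicitly, the standard SLURM commands below can be used. This is only a sketch: it assumes the node names known to SLURM match the hostnames in the table above, so check ''sinfo'' first if unsure.

<code bash>
# List the nodes SLURM knows about, with their partitions, CPU counts and state
sinfo -Nl

# Submit a batch script to one specific node, e.g. compute05
sbatch -w compute05 example_job.sbatch
</code>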
===== Backups =====
At the moment we do not back up users' data in their home folders, so we advise users to keep their own backups.
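One simple, hypothetical way to do this from your own machine is to pull a copy of your HPC home directory with ''rsync'' (the username and destination folder below are placeholders):

<code bash>
# Mirror your HPC home directory into a local folder named hpc-backup
rsync -avz jdoe@hpc.ilri.cgiar.org:~/ ~/hpc-backup/
</code>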