using-the-cluster: created 2013/07/09 08:54 by aorth; [Cluster Organization] revised 2020/03/19 12:16 by jean-baka
====== Using the Cluster ======
ILRI's high-performance computing "cluster" is currently composed of several dedicated machines:
  * **hpc**: main login node, the "master" of the cluster
  * **taurus**, **compute2**, **compute04**: used for batch and interactive jobs like BLAST, structure, R, etc (compute04 has lots of disk space under its ''/var/scratch'')
  * **mammoth**: used for high-memory jobs like genome assembly (mira, newbler, abyss, etc)
  * **compute03**: fast CPUs, but few of them
  * **compute05**: batch jobs; has the fastest processors (AMD EPYC)
  
To get access to the cluster, talk to Jean-Baka (he sits in BecA). Once you have access, read up on [[Using SLURM|SLURM]] to learn how to submit jobs to the cluster.
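Once you have an account, work is typically submitted as a batch script. Here is a minimal sketch of a SLURM job script; the job name, resource values, and placeholder command are only illustrative, so check the [[Using SLURM|SLURM]] page for this cluster's actual partitions and limits:

<code bash>
#!/usr/bin/env bash
#SBATCH --job-name=test-job      # name shown in the queue
#SBATCH --cpus-per-task=4        # CPUs to reserve
#SBATCH --mem=8G                 # memory to reserve
#SBATCH --output=test-%j.out     # log file (%j expands to the job ID)

# Placeholder workload; replace with your actual command (BLAST, R, etc).
MSG="Running on $(hostname)"
echo "$MSG"
</code>

Save it as e.g. ''myjob.sh'', submit it with ''sbatch myjob.sh'', and check its status with ''squeue''.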
  
===== Cluster Organization =====
The cluster is arranged in a master/slave configuration; users log into HPC (the master) and use it as a "jumping off point" to the rest of the cluster. Here's a diagram of the topology. For each server, we mention the number of CPUs and the year it was commissioned.
{{:hpc_topology_2019_web.png|}}
  
==== Detailed Information ====
  
^Machine           ^Specifications                ^Uses                 ^1 hour status   ^
|taurus | 116 GB RAM \\ 64 CPUs | batch and interactive jobs \\ Good for BLAST, structure, R, admixture, etc. |{{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=taurus&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
|mammoth | 516 GB RAM \\ 16 CPUs | batch and high-memory jobs \\ Good for genome assembly (mira, newbler, abyss, etc) |{{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=mammoth&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
|compute2 | 132 GB RAM \\ 64 CPUs | batch and interactive jobs \\ Good for BLAST, structure, R, etc. |{{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute2&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
|compute03 | 442 GB RAM \\ 8 CPUs | batch and high-memory jobs \\ Good for genome assembly (mira, newbler, abyss, etc), mothur |{{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute03&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
|compute04 | 48 GB RAM \\ 8 CPUs \\ 10TB scratch | batch jobs \\ Good for BLAST, structure, R, etc that need lots of disk space |{{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute04&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
|compute05 | 384 GB RAM \\ 48 CPUs \\ 1.6TB scratch | batch jobs \\ Most recent AMD EPYC CPUs, good for BLAST, structure, R, etc |{{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute05&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
===== Backups =====
At the moment we do not back up users' data in their home folders. We therefore advise users to keep their own backups.
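Since home folders are not backed up, a simple approach is to mirror your data to another disk or machine with ''rsync''. The sketch below uses temporary directories so it can be run safely; in practice you would replace them with your home folder and a backup location of your own:

<code bash>
# Demonstration with temporary directories; substitute your own
# source and destination paths in real use.
SRC=$(mktemp -d)
DEST=$(mktemp -d)
echo "my data" > "$SRC/results.txt"

# -a preserves permissions and timestamps; --delete removes files at
# the destination that no longer exist at the source (a true mirror).
rsync -a --delete "$SRC/" "$DEST/"
</code>

Re-running the same command later transfers only the files that changed, which keeps repeated backups fast.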
using-the-cluster.txt · Last modified: 2023/01/06 06:14 by aorth