====== Using the Cluster ======
ILRI's high-performance computing "cluster" is currently composed of dedicated machines:
  * hpc: main login node, "master" of the cluster
  * taurus, compute2, compute04: used for batch and interactive jobs like BLAST, structure, R, etc.
  * mammoth: used for high-memory jobs like genome assembly (mira, newbler, abyss, etc.)
  * compute03: fast CPUs, but few of them
  * compute04: batch jobs, has lots of disk space in ''/var/scratch''
  * compute05: batch jobs, has the fastest processors (AMD EPYC)
  
To get access to the cluster you should talk to Jean-Baka (he sits in BecA). Once you have access, you should read up on [[Using SLURM|SLURM]] to learn how to submit jobs to the cluster.
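
For a quick taste of what job submission looks like, here is a minimal SLURM batch script. This is a sketch only: the partition name, file names, and BLAST database are assumptions for illustration; see the [[Using SLURM|SLURM]] page for the settings that actually apply on this cluster.

<code bash>
#!/usr/bin/env bash
#SBATCH --job-name=blast-test      # name shown by squeue
#SBATCH --partition=batch          # assumed partition name; check the SLURM page
#SBATCH --cpus-per-task=4          # request 4 CPU cores on one node
#SBATCH --output=blast-%j.out      # %j expands to the job ID

# Hypothetical BLAST search; replace the query and database with your own
blastn -query sequences.fasta -db nt \
       -num_threads "$SLURM_CPUS_PER_TASK" -out results.txt
</code>

Save it as ''job.sh'', submit it from the login node with ''sbatch job.sh'', and watch it in the queue with ''squeue''.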
  
===== Cluster Organization =====
The cluster is arranged in a master/slave configuration; users log into HPC (the master) and use it as a "jumping off point" to the rest of the cluster. Here's a diagram of the topology:
{{:infrastructure-2018-fs8.png|}}
  
==== Detailed Information ====
  
^ Machine   ^ Specifications ^ Uses ^ 1 hour status ^
| taurus    | 116 GB RAM \\ 64 CPUs | batch and interactive jobs \\ Good for BLAST, structure, R, admixture, etc. | {{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=taurus&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
| mammoth   | 516 GB RAM \\ 16 CPUs | batch and high-memory jobs \\ Good for genome assembly (mira, newbler, abyss, etc.) | {{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=mammoth&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
| compute2  | 132 GB RAM \\ 64 CPUs | batch and interactive jobs \\ Good for BLAST, structure, R, etc. | {{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute2&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
| compute03 | 442 GB RAM \\ 8 CPUs | batch and high-memory jobs \\ Good for genome assembly (mira, newbler, abyss, etc.), mothur | {{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute03&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
| compute04 | 48 GB RAM \\ 8 CPUs \\ 10 TB scratch | batch jobs \\ Good for BLAST, structure, R, etc. that need lots of disk space | {{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute04&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
| compute05 | 384 GB RAM \\ 48 CPUs \\ 1.6 TB scratch | batch jobs \\ Most recent AMD EPYC CPUs, good for BLAST, structure, R, etc. | {{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute05&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
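
When a job really needs the characteristics of one machine from the table above (for example mammoth's memory or compute05's CPUs), SLURM can pin it to that node with ''--nodelist''. A sketch, assuming the cluster's SLURM configuration permits direct node selection:

<code bash>
# Interactive shell on compute05 (fastest CPUs); --pty attaches a terminal
srun --nodelist=compute05 --cpus-per-task=4 --pty bash

# High-memory batch job pinned to mammoth; assembly.sh is a hypothetical script
sbatch --nodelist=mammoth --mem=400G assembly.sh
</code>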
===== Backups =====
At the moment we don't back up users' data in their respective home folders. We therefore advise users to keep their own backups.
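
For example, a simple way to keep your own copy is to pull your home directory to your local machine with ''rsync'' over SSH. This is a sketch; replace ''username'' and the destination path with your own:

<code bash>
# Mirror your cluster home directory into a local backup folder;
# -a preserves permissions/timestamps, -z compresses during transfer
rsync -avz username@hpc.ilri.cgiar.org:~/ ~/hpc-backup/
</code>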