====== Using the cluster ======

ILRI's high-performance computing "cluster" is currently composed of 6 dedicated machines:
  * **hpc**: main login node, "master" of the cluster
  * **compute2**, **compute05**, **compute06**: used for batch and interactive jobs like BLAST, structure, R, etc (compute05 and compute06 have the newest AMD EPYC CPUs)
  * **compute07**: used for high-memory jobs like genome assembly (mira, newbler, abyss, etc)
  * **compute03**: fast CPUs, but few of them
  
To get access to the cluster you should talk to Jean-Baka (he sits in BecA). Once you have access you should read up on [[Using SLURM|SLURM]] so you can learn how to submit jobs to the cluster.
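
Once you have access, jobs are normally submitted to SLURM as small batch scripts. The following is a minimal sketch, not an official template for this cluster; the job name, resource requests, and the ''blastn'' command are illustrative assumptions:

<code bash>
#!/bin/bash
#SBATCH --job-name=blast_test    # name shown in the queue (illustrative)
#SBATCH --cpus-per-task=4        # CPUs to reserve for this job
#SBATCH --mem=8G                 # memory to reserve

# Hypothetical workload; replace with your actual analysis command.
blastn -query input.fasta -db nt -out results.txt -num_threads 4
</code>

You would save this as, say, ''blast.sbatch'', submit it with ''sbatch blast.sbatch'', and check on it with ''squeue''.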
===== How to Connect to the Cluster =====
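
If you are on Linux, macOS, or another system with a command-line SSH client, you can connect directly to the login node. A minimal sketch, where ''yourusername'' is a placeholder for your cluster account name:

<code bash>
# Connect to the cluster's main login node over SSH (default port 22).
ssh yourusername@hpc.ilri.cgiar.org
</code>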
  
  
Another option is to [[https://mobaxterm.mobatek.net/download-home-edition.html|install MobaXterm]]: choose the "installer edition" unless you don't have rights to install software on the computer you are using, in which case you will use the "portable edition". Once you have installed MobaXterm, you can set up a new connection by specifying the following connection parameters:
  * host: ''hpc.ilri.cgiar.org''
  * port: leave the default SSH port, i.e. port 22
  
^Machine           ^Specifications                ^Uses                 ^1 hour status   ^
|compute2| 132 GB RAM \\ 64 CPUs | batch and interactive jobs \\ Good for BLAST, structure, R, etc.|{{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute2&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
|compute03 | 442 GB RAM \\ 8 CPUs | batch and high-memory jobs \\ Good for genome assembly (mira, newbler, abyss, etc), mothur|{{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute03&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
|compute05 | 384 GB RAM \\ 48 CPUs \\ 1.6TB scratch | batch jobs \\ Most recent AMD EPYC CPUs, good for BLAST, structure, R, etc |{{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute05&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
|compute06|256 GB RAM \\ 64 CPUs | batch and interactive jobs \\ Good for BLAST, structure, R, admixture, etc.|{{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute06&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
|compute07| 1 TB RAM \\ 8 CPUs | batch and high-memory jobs \\ Good for genome assembly (mira, newbler, abyss, etc)|{{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute07&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
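
The Ganglia graphs in the table give a one-hour load history; you can also query node states directly from the login node, assuming SLURM's standard tools are on your ''PATH'':

<code bash>
# Show every node with its state (idle, mixed, allocated, down)
# and its CPU and memory configuration.
sinfo --Node --long
</code>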
===== Backups =====
At the moment we don't back up users' data in their respective home folders. We therefore advise users to keep their own backups.
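
One simple way to keep a backup is to pull copies of important results down to your own machine with ''rsync''; a minimal sketch, where ''yourusername'' and both paths are placeholders:

<code bash>
# Mirror a results directory from the cluster to a local backup folder.
# -a preserves permissions and timestamps, -v is verbose, -z compresses in transit.
rsync -avz yourusername@hpc.ilri.cgiar.org:~/results/ ~/cluster-backups/results/
</code>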