====== Using the cluster ======

ILRI's high-performance computing "cluster" is currently composed of 6 dedicated machines:
  * **hpc**: main login node, "master" of the cluster
  * **compute2**, **compute05**, **compute06**: used for batch and interactive jobs like BLAST, structure, R, etc. (compute05 and compute06 have the newest AMD EPYC CPUs)
  * **compute07**: used for high-memory jobs like genome assembly (mira, newbler, abyss, etc.)
  * **compute03**: fast CPUs, but few of them
  
To get access to the cluster you should talk to Jean-Baka (he sits in BecA). Once you have access you should read up on [[Using SLURM|SLURM]] so you can learn how to submit jobs to the cluster.
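
As a quick preview, a SLURM batch job is just a shell script with ''#SBATCH'' directives at the top. The sketch below is a minimal, hypothetical example: the partition, module version and input files are placeholders, so check the [[Using SLURM|SLURM]] page for the values that actually apply on this cluster.

<code bash>
#!/usr/bin/env bash
#SBATCH --job-name=blastn-test        # name shown in the queue
#SBATCH --partition=batch             # partition name is an assumption; see the SLURM page
#SBATCH --cpus-per-task=4             # CPU cores to reserve on one node
#SBATCH --output=blastn-%j.out        # log file; %j expands to the job ID
##SBATCH --nodelist=compute05         # optionally pin the job to a specific machine

# load the software you need (module name and version are hypothetical)
module load blast/2.12.0

# run the analysis with the reserved cores
blastn -num_threads 4 -query query.fasta -db nt -out results.txt
</code>

You would then submit the script with ''sbatch myscript.sh'' and follow its progress with ''squeue''.
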
Connecting to the HPC **is not done by clicking on the "Log In" link** in the top right corner of these wiki pages. In order to launch computations on the HPC, or even just to view files residing in its storage infrastructure, users must use the **SSH protocol**. Through this protocol, users gain command-line access to the HPC from an SSH //client// installed on their own machine (e.g. a laptop, desktop or smartphone). The procedure differs depending on the operating system of the computer from which you want to establish the connection:
  
==== If you are running Mac OS X (on Apple computers) or any GNU/Linux distribution ====
  
Those operating systems are part of the large family of UNIX systems, which almost invariably contain an already-installed SSH client, most often some flavor of the [[https://www.openssh.com/|OpenSSH]] client. Just open a terminal emulator and run the command ''ssh username@hpc.ilri.cgiar.org'', where you replace ''username'' with your own username on the HPC (as communicated by the person who created your account there).
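
For example, a minimal session looks like the sketch below; the username ''jdoe'' and the file name are placeholders for your own:

<code bash>
# log in to the HPC login node (you will be prompted for your password)
ssh jdoe@hpc.ilri.cgiar.org

# copy a file from your computer to your HPC home directory
# (run this from your own machine, not from the HPC)
scp data.fasta jdoe@hpc.ilri.cgiar.org:~/
</code>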
  
^ Machine   ^ Specifications ^ Uses ^ 1 hour status ^
| compute2  | 132 GB RAM \\ 64 CPUs | batch and interactive jobs \\ Good for BLAST, structure, R, etc. | {{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute2&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
| compute03 | 442 GB RAM \\ 8 CPUs | batch and high-memory jobs \\ Good for genome assembly (mira, newbler, abyss, etc.), mothur | {{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute03&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
| compute05 | 384 GB RAM \\ 48 CPUs \\ 1.6 TB scratch | batch jobs \\ Most recent AMD EPYC CPUs, good for BLAST, structure, R, etc. | {{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute05&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
| compute06 | 256 GB RAM \\ 64 CPUs | batch and interactive jobs \\ Good for BLAST, structure, R, admixture, etc. | {{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute06&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
| compute07 | 1 TB RAM \\ 8 CPUs | batch and high-memory jobs \\ Good for genome assembly (mira, newbler, abyss, etc.) | {{https://hpc.ilri.cgiar.org/ganglia/graph.php?g=load_report&z=medium&c=compute&h=compute07&m=os_name&r=hour&s=descending&hc=4&mc=2&.gif?}} |
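
The Ganglia graphs above show each machine's load over the last hour. You can also check node state directly from the login node with SLURM's standard commands:

<code bash>
# show each compute node, its partition and whether it is idle, mixed or fully allocated
sinfo --Node --long

# show the jobs currently queued or running and the nodes they are using
squeue --long
</code>
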
===== Backups =====
At the moment we do not back up users' data in their respective home folders. We therefore advise users to keep their own backups.
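
One simple way to do this is to periodically pull your HPC home directory down to your own computer with ''rsync''; the username and destination folder below are placeholders:

<code bash>
# run this from your own computer, not from the HPC;
# it mirrors your HPC home directory into a local folder called hpc-backup
rsync -avz jdoe@hpc.ilri.cgiar.org:~/ ~/hpc-backup/
</code>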