====== Using the Cluster ======
ILRI's high-performance computing cluster is currently composed of four dedicated machines:

  * **hpc**: main login node, "head" of the cluster
  * **compute05**, **compute06**: compute nodes for batch and interactive jobs
  * **compute07**: compute node
To get access to the cluster you should talk to Alan Orth or Jean-Baka in the Data and Research Methods Unit (Mara House). Once you have access you should read up on [[Using SLURM|SLURM]] so you can learn how to submit jobs to the cluster.
Connecting to the HPC **is not done through clicking on the "Log In" link** in the top right corner of these wiki pages. In order to launch computations on the HPC, or even just to view files residing in its storage infrastructure, you need to connect to it with an SSH client.
==== macOS (on Apple computers) or GNU/Linux ====
Those operating systems are part of the large family of UNIX systems, which almost invariably include an already-installed SSH client, most often some flavor of the [[https://
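From a terminal, the whole login is a single command. The username and hostname below are placeholders, not the cluster's real address; use the account name and hostname given to you by the cluster administrators:

```shell
# The interactive login itself (type this in your local terminal):
#
#   ssh username@hpc.example.org
#
# "username" and "hpc.example.org" are hypothetical placeholders.
# Before connecting, you can ask the OpenSSH client how it would resolve
# that destination; `ssh -G` prints the effective client configuration
# without opening a network connection:
ssh -G username@hpc.example.org | head -n 4
```

On the first real connection you will be prompted to accept the server's host key before authenticating.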
==== Microsoft Windows ====
If you are running Windows 10+, you can access a simple ssh client by [[https://
===== Cluster Organization =====
The cluster is arranged in a head/compute configuration; users log into HPC (the head) and use it as a launching point for submitting jobs to the compute nodes.
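Job submission itself is covered on the [[Using SLURM|SLURM]] page; as a rough sketch, a minimal batch script looks something like the following. The partition name and resource numbers here are illustrative assumptions, not the cluster's actual configuration:

```shell
#!/usr/bin/env bash
#SBATCH --job-name=example        # name shown in the queue
#SBATCH --partition=batch         # partition name is an assumption; check `sinfo`
#SBATCH --cpus-per-task=4         # illustrative resource request
#SBATCH --output=example-%j.log   # %j expands to the SLURM job ID

# The real work would go here, e.g. a BLAST or R invocation.
echo "Job running on $(hostname) with ${SLURM_CPUS_PER_TASK:-4} CPU(s)"
```

You would submit it from the head node with `sbatch example.sh` and watch its progress with `squeue`.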
^Machine ^Resources ^Uses ^ ^
|compute05 | 384 GB RAM \\ 48 CPUs \\ 1.6TB scratch | batch jobs \\ Most recent AMD EPYC CPUs, good for BLAST, structure, R, etc |{{https://
|compute06 | 256 GB RAM \\ 64 CPUs | batch and interactive jobs \\ Good for BLAST, structure, R, admixture, etc. |{{https://
using-the-cluster.1623596990.txt.gz · Last modified: 2021/06/13 15:09 by aorth