====== Using the Cluster ======
ILRI's high-performance computing cluster is currently composed of four dedicated servers:

  * **hpc**: main login node, the "master" of the cluster
  * **compute05**, **compute06**: batch and interactive jobs
  * **compute07**: batch and high-memory jobs

To get access to the cluster you should talk to Alan Orth or Jean-Baka in the Data and Research Methods Unit (Mara House). Once you have access you should read up on [[Using SLURM|SLURM]] so you can learn how to submit jobs to the cluster.
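As a first taste, a minimal batch script might look like the following sketch. The resource numbers, the module name, and the BLAST command are placeholders only; consult the [[Using SLURM|SLURM]] page for the partitions and software that actually exist on this cluster.

<code bash>
#!/usr/bin/env bash
#SBATCH --job-name=example      # a name for the job
#SBATCH --cpus-per-task=4       # number of CPU cores to reserve
#SBATCH --mem=8G                # amount of RAM to reserve

# Load your tool's environment, then run it
# (module name and command below are placeholders)
module load blast/2.12.0
blastn -query sequences.fasta -db nt -out results.txt -num_threads 4
</code>

Submit the script with ''sbatch myjob.sbatch'' and check its status with ''squeue''.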
===== How to Connect to the Cluster =====
Connecting to the HPC **is not done by clicking on the "Log In" link** at the top right corner of these wiki pages: in order to launch computations on the HPC, or even just to view files residing in its storage infrastructure, you need to open an SSH connection to the login node.
==== macOS (on Apple computers) or GNU/Linux ====
Those operating systems are part of the large family of UNIX systems, which almost invariably come with an SSH client already installed, most often some flavor of [[https://www.openssh.com/|OpenSSH]]. You can therefore connect directly from a terminal.
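For example (replace ''username'' with your own cluster login; the host name below is assumed to be the cluster's login node):

<code bash>
# Connect to the cluster's login node
ssh username@hpc.ilri.cgiar.org
</code>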
==== Microsoft Windows ====
If you are running Windows 10 or newer, you already have access to a simple SSH client: recent versions of Windows ship with the OpenSSH client, which you can use directly from PowerShell or the Command Prompt.
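To confirm that the client is available, open PowerShell and run the following; it simply prints the installed OpenSSH version:

<code bash>
# Prints the OpenSSH client version if it is installed
ssh -V
</code>

The connection command itself is then the same as on UNIX systems, e.g. ''ssh username@hpc.ilri.cgiar.org''.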
Another option is to install a graphical SSH client such as PuTTY or MobaXterm and create a new session with the following settings:
  * host: ''hpc.ilri.cgiar.org''
  * port: leave the default SSH port, i.e. port 22
===== Cluster Organization =====
The cluster is arranged in a head/compute configuration: users log in to hpc (the head node) and use it as a "gateway" from which jobs are dispatched to the compute nodes, where the actual computations run.
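From the login node you can inspect the compute nodes and the job queue at any time with SLURM's standard commands:

<code bash>
# List the nodes and partitions that SLURM manages
sinfo --Node --long

# Show the jobs currently queued or running
squeue
</code>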
^Machine^Specs^Uses^
|compute05|384 GB RAM \\ 48 CPUs \\ 1.6TB scratch|batch jobs \\ Most recent AMD EPYC CPUs, good for BLAST, structure, R, etc|
|compute06|256 GB RAM \\ 64 CPUs|batch and interactive jobs \\ Good for BLAST, structure, R, admixture, etc.|
|compute07|1 TB RAM \\ 8 CPUs|batch and high-memory jobs \\ Good for genome assembly (mira, newbler, abyss, etc)|
===== Backups =====
At the moment we don't back up users' data in their respective home folders. We therefore advise users to keep their own backups.
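A simple way to do this is to pull your important files down to your own computer with ''rsync'', which ships with most UNIX systems; the paths in this sketch are placeholders to adapt:

<code bash>
# Run on your own computer, NOT on the cluster:
# copies the "results" folder from your cluster home directory
# into a local "cluster-backup" folder, preserving timestamps.
rsync -avz username@hpc.ilri.cgiar.org:results/ ~/cluster-backup/results/
</code>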