====== Using the Cluster ======

ILRI's high-performance computing "cluster" is made up of several machines:

  * **hpc**: main login node
  * **taurus**, **compute2**, **compute04**: used for batch and interactive jobs like BLAST, structure, R, etc
  * **mammoth**: used for high-memory jobs like genome assembly (mira, newbler, abyss, etc)
  * **compute03**: fast CPUs, but few of them
  * **compute05**: newest node, with 48 recent AMD EPYC CPUs and 384 GB of RAM

To get access to the cluster you should talk to Jean-Baka (he sits in BecA). Once you have access you should read up on [[Using SLURM|SLURM]] so you can learn how to submit jobs to the cluster.
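
As a taste of what job submission looks like, here is a minimal sketch of a SLURM batch script. The partition name, module name, and input files are placeholders rather than the actual values used on this cluster, so consult the [[Using SLURM|SLURM]] page before submitting real jobs.

<code bash>
#!/usr/bin/env bash
#SBATCH --job-name=blast-test    # name shown in the job queue
#SBATCH --cpus-per-task=4        # CPU cores to reserve on one node
#SBATCH --partition=batch        # placeholder partition name; see the SLURM page

# Load the software you need (module name is only an example).
module load blast

# Run the actual work on the reserved CPUs.
blastn -query input.fasta -db nt -out results.txt -num_threads 4
</code>

Saved as, for example, ''blast.sbatch'', the script is submitted with ''sbatch blast.sbatch'' and its progress can be followed with ''squeue''.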
===== How to Connect to the Cluster =====

In order to launch computations on the HPC, or even just to view files residing in its storage infrastructure, you first have to connect to it through SSH (Secure Shell).

=== If you are running MacOSX (on Apple computers) or any GNU/Linux distribution ===

Those operating systems are part of the large family of UNIX systems, which almost invariably come with an already-installed SSH client, most often some flavor of OpenSSH, so you can connect directly from a terminal.

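A typical connection looks like the sketch below; the username and hostname are placeholders, so use the login details you were given when your account was created.

<code bash>
# Replace "username" with your cluster login and the hostname with the
# address given to you by the HPC administrators (the one below is an example).
ssh username@hpc.ilri.cgiar.org

# Copying a file from your computer to your cluster home directory works the same way:
scp results.tar.gz username@hpc.ilri.cgiar.org:~/
</code>
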
If this doesn't work, your system is probably missing an SSH client, and you will need to install one (for example the ''openssh-client'' package on Debian/Ubuntu) before you can connect.

===== Cluster Organization =====

The cluster is arranged in a master/slave configuration: you log in on the master node (hpc) and SLURM dispatches your jobs to the slave (compute) nodes.

{{:hpc_topology_2019_web.png|}}

==== Detailed Information ====

^Machine^Specifications^Use^
|taurus|116 GB RAM \\ 64 CPUs|batch and interactive jobs \\ Good for BLAST, structure, R, admixture, etc.|
|mammoth|516 GB RAM \\ 8 CPUs|batch and high-memory jobs \\ Good for genome assembly (mira, newbler, abyss, etc)|
|compute2|132 GB RAM \\ 64 CPUs|batch and interactive jobs \\ Good for BLAST, structure, R, etc.|
|compute03|442 GB RAM \\ 8 CPUs|batch and high-memory jobs \\ Good for genome assembly (mira, newbler, abyss, etc), mothur|
|compute04|48 GB RAM \\ 8 CPUs \\ 10TB scratch|batch jobs \\ Good for BLAST, structure, R, etc that need lots of disk space|
|compute05|384 GB RAM \\ 48 CPUs \\ 1.6TB scratch|batch jobs \\ Most recent AMD EPYC CPUs, good for BLAST, structure, R, etc|
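
To check which of these machines are currently free before submitting a job, the standard SLURM commands give a live view; the partition and node names you see will depend on the local configuration.

<code bash>
# List partitions, their nodes and their current state (idle, mixed, allocated, down).
sinfo

# Show all jobs that are queued or running, and which node each one is using.
squeue

# Show only your own jobs (replace "username" with your cluster login).
squeue -u username
</code>
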
===== Backups =====

At the moment we don't back up users' data in their respective home folders. We therefore advise users to keep their own backups.
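
A simple way to do this is to pull your important results from the cluster onto your own computer with ''rsync''; the hostname and paths below are placeholders, not actual locations on this cluster.

<code bash>
# Run this on your own computer, not on the cluster.
# Replace "username", the hostname and the paths with your own values.
rsync -av --progress username@hpc.ilri.cgiar.org:~/my_project/ ~/backups/my_project/
</code>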