====== Using the Cluster ======
ILRI's high-performance computing cluster is composed of several machines:

  * **hpc**: main login node
  * **taurus**, **compute2**: used for batch and interactive jobs like BLAST, structure, R, etc
  * **mammoth**: used for high-memory jobs like genome assembly (mira, newbler, abyss, etc)
  * **compute03**: fast CPUs, but few of them
  * **compute04**: batch jobs, has lots of local disk space
  * **compute05**: batch jobs, has the fastest processors (AMD EPYC)
To get access to the cluster you should talk to Jean-Baka (he sits in BecA). Once you have access, you should read up on [[Using SLURM|SLURM]] so you can learn how to submit jobs to the cluster.
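As a taste of what job submission looks like, here is a minimal sketch of a SLURM batch script. The partition name and the ''blastn'' workload are placeholders, not this cluster's actual configuration; see the [[Using SLURM|SLURM]] page for the partitions and software actually available here.

<code bash>
#!/usr/bin/env bash
#SBATCH --job-name=blast-test     # a human-readable name for the job
#SBATCH --partition=batch         # placeholder partition; run `sinfo` to list the real ones
#SBATCH --cpus-per-task=4         # CPU cores to reserve
#SBATCH --mem=8G                  # memory to reserve
#SBATCH --output=blast-%j.out     # log file; %j expands to the job ID

# Placeholder workload: any command you would otherwise run interactively
blastn -query input.fasta -db nt -num_threads "$SLURM_CPUS_PER_TASK" -out results.tsv
</code>

You would then submit the script with ''sbatch'' and monitor it with ''squeue''.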
===== How to Connect to the Cluster =====

Connections to the cluster are made over SSH (Secure Shell); how you obtain an SSH client depends on your operating system.
==== If you are running MacOSX (on Apple computers) or any GNU/Linux distribution ====

Those operating systems are part of the large family of UNIX systems, which almost invariably contain an already-installed SSH client, most often some flavor of OpenSSH. Connecting is then just a matter of opening a terminal and running ''ssh''.

If the above doesn't work on your system, you can usually install an SSH client through your distribution's package manager.
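For example, assuming a username of ''jsmith'' (replace it with your own), and using the login node address given to you by the administrators (the hostname below is only a placeholder), a session from the terminal looks like this:

<code bash>
# Log in to the cluster (substitute your username and the real hostname)
ssh jsmith@hpc.example.org

# Copy a file from your machine into your home folder on the cluster
scp results.tar.gz jsmith@hpc.example.org:~/
</code>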
==== If you are running Microsoft Windows ====

If you are running Windows 10, you can access a simple ssh client by enabling the built-in OpenSSH client, then running ''ssh'' from PowerShell or the Command Prompt.
Another option is to download and install a standalone graphical SSH client such as PuTTY. When creating a new connection, use the following settings:
  * host: the address of the login node (the ''hpc'' machine), as communicated by the person who created your HPC account
  * port: leave the default SSH port, i.e. port 22
  * username: your username, as communicated by the person who created your HPC account
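If you end up using a command-line OpenSSH client instead (the Windows 10 built-in client or any UNIX system), the same three settings can be stored in an SSH configuration file so that a short alias is enough to connect; the hostname and username below are only placeholders for the values given with your account:

<code>
# ~/.ssh/config
Host hpc
    HostName hpc.example.org   # placeholder: the real address of the login node
    Port 22                    # the default SSH port
    User jsmith                # placeholder: your own username
</code>

With this in place, ''ssh hpc'' opens the connection.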
===== Cluster Organization =====

The cluster is arranged in a master/worker topology: users log in to the master node (''hpc'') and from there submit jobs that SLURM runs on the compute nodes, as shown in the diagram below.

{{:hpc_topology_2019_web.png|}}
==== Detailed Information ====
|compute2 | 132 GB RAM \\ 64 CPUs | batch and interactive jobs \\ Good for BLAST, structure, R, etc. |
|compute03 | 442 GB RAM \\ 8 CPUs | batch and high-memory jobs \\ Good for genome assembly (mira, newbler, abyss, etc), mothur |
|compute04 | 48 GB RAM \\ 8 CPUs \\ 10TB scratch | batch jobs \\ Good for BLAST, structure, R, etc, that need lots of local disk space |
|compute05 | 384 GB RAM \\ 48 CPUs \\ 1.6TB scratch | batch jobs \\ Most recent AMD EPYC CPUs, good for BLAST, structure, R, etc |
===== Backups =====

At the moment we don't back up users' data in their respective home folders. We therefore advise users to keep their own backups.
using-the-cluster.txt · Last modified: 2023/01/06 06:14 by aorth