====== Using the Cluster ======
ILRI's high-performance computing "cluster" is currently composed of 7 dedicated machines:
  * **hpc**: main login node, "master" of the cluster
  * **taurus**, **compute2**, **compute04**: used for batch and interactive jobs like BLAST, structure, R, etc. (compute04 has lots of disk space under its ''/var/scratch''); see the sketch after this list for steering a job to a specific machine
  * **mammoth**: used for high-memory jobs like genome assembly (mira, newbler, abyss, etc.)
  * **compute03**: fast CPUs, but few of them
  * **compute05**: batch jobs, has the fastest processors (AMD EPYC)
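
Since the machines have different strengths, it sometimes helps to steer a job to a particular one. A minimal sketch using SLURM's standard ''-w''/''--nodelist'' option (the script name is a placeholder, and whether direct node selection is permitted here is up to the administrators):

<code bash>
# Ask SLURM for a specific machine with -w/--nodelist.
srun -w mammoth --pty bash        # interactive shell on the high-memory node
sbatch -w compute04 assembly.sh   # batch script (hypothetical) pinned to compute04
</code>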
  
To get access to the cluster, talk to Jean-Baka (he sits in BecA). Once you have access, read up on [[Using SLURM|SLURM]] to learn how to submit jobs to the cluster.
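
As a taste of what that page covers, here is a minimal sketch of a batch script; the job name, CPU count, and output filename are placeholders rather than site conventions:

<code bash>
#!/usr/bin/env bash
#SBATCH --job-name=example        # placeholder job name
#SBATCH --cpus-per-task=4         # placeholder CPU request
#SBATCH --output=example-%j.out   # %j expands to the job ID

# Replace this line with your real work, e.g. a BLAST or R command.
echo "Running on $(hostname) with ${SLURM_CPUS_PER_TASK} CPUs"
</code>

Save it as e.g. ''example.sh'', submit it with ''sbatch example.sh'', and check on it with ''squeue''.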
  
===== Cluster Organization =====
The cluster is arranged in a master/slave configuration; users log into HPC (the master) and use it as a "jumping off point" to the rest of the cluster. Here's a diagram of the topology. For each server, we mention the number of CPUs and the year it was commissioned:
{{:hpc_topology_2019_web.png|}}
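
In practice, "jumping off" means logging into the master over SSH and handing work to SLURM from there. A minimal sketch; the hostname and username below are assumptions, so use the address you were given when your account was created:

<code bash>
# Hostname and username are assumptions -- substitute your own.
ssh username@hpc.ilri.cgiar.org
# From hpc, work reaches the other machines through SLURM (see above)
# rather than by logging into the compute nodes directly.
</code>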
  