
Using the cluster

ILRI's high-performance computing "cluster" is currently composed of several dedicated machines:

  • hpc - the main login node and "master" of the cluster
  • taurus, compute2 - good for batch and interactive jobs such as BLAST, structure, R, etc.
  • mammoth - good for high-memory jobs such as genome assembly (mira, newbler, abyss, etc.)

To get access to the cluster, talk to Alan Orth (he sits in Lab 2). Once you have access, read up on SLURM to learn how to submit jobs to the cluster.
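As a rough sketch of what submitting a batch job with SLURM looks like, here is a minimal job script; the partition name, resource requests, and BLAST input/output paths are placeholders, not site-specific values, so check `sinfo` and local documentation before using them:

```shell
#!/usr/bin/env bash
#SBATCH --job-name=blast-example   # job name shown in squeue (example)
#SBATCH --partition=batch          # partition name is an assumption; list real ones with: sinfo
#SBATCH --cpus-per-task=4          # request 4 CPUs on one node
#SBATCH --output=blast-%j.out      # write output to blast-<jobid>.out

# Example workload; query.fa and the database path are placeholders
blastn -query query.fa -db nt -num_threads 4 -out results.txt
```

Save this as, say, `blast.sbatch`, submit it with `sbatch blast.sbatch`, and monitor it with `squeue`.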

Cluster organization

The cluster is arranged in a master/slave configuration; users log into hpc (the master) and use it as a "jumping off point" to the rest of the cluster. Here's a diagram of the topology:

[diagram: cluster network topology]

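In practice, the "jumping off point" workflow looks roughly like the following; the login hostname is an assumption (ask ILRI IT for the real address), and `srun --pty` is the generic SLURM way to get an interactive shell on a compute node:

```shell
# Log into the master node (hostname is a placeholder; confirm the real one)
ssh username@hpc.example.org

# From hpc, ask SLURM for an interactive shell on a compute node
srun --pty bash
```

Interactive sessions like this are what taurus and compute2 are intended for; heavy work should always run on the compute nodes, not on hpc itself.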
Detailed information

Machine  | Specifications      | Uses
taurus   | 112 GB RAM, 64 CPUs | batch and interactive jobs; good for BLAST, structure, R, etc.
mammoth  | 512 GB RAM, 16 CPUs | batch and high-memory jobs; good for genome assembly (mira, newbler, abyss, etc.)
using-the-cluster.1407334174.txt.gz · Last modified: 2014/08/06 14:09 by joguya