All output is directed to ''$WORKDIR/'', which is the temporary folder on the compute node. See these slides from [[https://alanorth.github.io/hpc-users-group3/#/2|HPC Users Group #3]] for more info.

==== Run job using a GPU ====
Currently there is only one compute node with GPU capabilities. As of February 2026, compute06 has an NVIDIA Tesla V100 with 32GB of memory. To use it, add an extra "gres" line to your SLURM sbatch job specification.
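
Before requesting it, you can check what GPU resources SLURM advertises on the node with ''scontrol''. This is just a sanity check; the exact ''Gres'' string below is illustrative and depends on how the cluster is configured:

<code>$ scontrol show node compute06 | grep -i gres
   Gres=gpu:v100:1</code>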

For example, ''beast-gpu.sbatch'':

<code>#!/usr/bin/bash
#SBATCH -p batch
#SBATCH -w compute06
#SBATCH --gres=gpu:v100:1
#SBATCH -n 8
#SBATCH -J beast-GPU

# load module(s)
module load beagle/3.1.2
module load beast/1.10.4

beast -beagle_info</code>

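Submit it with ''sbatch'' as usual; SLURM will hold the job until the requested GPU is free on compute06. The job ID in this example is illustrative:

<code>$ sbatch beast-gpu.sbatch
Submitted batch job 747000</code>

The ''-beagle_info'' flag makes BEAST print the BEAGLE resources it detects, so the V100 should appear in the job's output if the GPU was allocated correctly.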
  
==== Check queue status ====
 746885     batch    model-selection    jjuma  R 4-20:45:15      8 compute06
 746998     batch        interactive  afeleke  R      30:09      1 compute06
 746999     batch             blastp    aorth  R       7:20      6 compute05</code>
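
By default ''squeue'' lists every user's jobs. To see only your own, filter by user with the standard ''-u'' option:

<code>$ squeue -u $USER</code>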