using-slurm [2022/08/04 12:10] aorth → using-slurm [2026/02/05 08:31] (current) aorth
Line 39: Line 39:
 ==== Batch jobs ====
 We are writing a SLURM script below. The parameters in its header request 4 CPUs in the ''batch'' partition and name our job "blastn". This name is only used internally by SLURM for reporting purposes. So let's go ahead and create a file //blast.sbatch//:
-<code>#!/usr/bin/bash -l # <--- DO NOT FORGET '-l', it enables the module command
+<code>#!/usr/bin/bash -l
 #SBATCH -p batch
 #SBATCH -J blastn
Line 49: Line 49:
 # run the blast with 4 CPU threads (cores)
 blastn -query ~/data/sequences/drosoph_14_sequences.seq -db nt -num_threads 4</code>
 +
 +In the above, please **DO NOT FORGET the '-l' option** on the first ("shebang") line, as it is required for correct interpretation of the ''module load'' commands.
  
 We then submit the script with the ''sbatch'' command:
Line 59: Line 61:
 Instead, you can use a local "scratch" folder on the compute nodes to alleviate this burden, for example:
  
-<code>#!/usr/bin/bash -l # <--- DO NOT FORGET '-l', it enables the module command
+<code>#!/usr/bin/bash -l
 #SBATCH -p batch
 #SBATCH -J blastn
Line 80: Line 82:
  
 All output is directed to ''$WORKDIR/'', which is the temporary folder on the compute node. See these slides from [[https://alanorth.github.io/hpc-users-group3/#/2|HPC Users Group #3]] for more info.
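 
The scratch pattern described above can be sketched in isolation. The following is a minimal, hypothetical illustration — ''mktemp'' stands in for the site's actual scratch location, and the analysis step is a placeholder:

```shell
# Sketch of the scratch-folder pattern (paths are illustrative, not the
# cluster's real scratch location): make a private working directory,
# run the job there, then clean up when finished.
WORKDIR=$(mktemp -d)           # per-job temporary directory on local disk
cd "$WORKDIR"
echo "working in $WORKDIR"
# ... run your analysis here; all output lands in $WORKDIR ...
cd "$HOME"
# (in a real job, copy any results back to your home directory first)
rm -rf "$WORKDIR"
```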
 +
 +==== Run job using a GPU ====
 +Currently there is only one compute node with GPU capabilities. As of February 2026, compute06 has an NVIDIA Tesla V100 with 32GB of memory. To use it, add an extra "gres" line to your SLURM sbatch job specification.
 +
 +For example, ''beast-gpu.sbatch'':
 +
 +<code>#!/usr/bin/bash -l
 +#SBATCH -p batch
 +#SBATCH -w compute06
 +#SBATCH --gres=gpu:v100:1
 +#SBATCH -n 8
 +#SBATCH -J beast-GPU
 +
 +# load module(s)
 +module load beagle/3.1.2
 +module load beast/1.10.4
 +
 +beast -beagle_info</code>
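 
As a quick sanity check from inside a GPU job, SLURM exports ''CUDA_VISIBLE_DEVICES'' for the allocated device(s). A hedged sketch — the ''nvidia-smi'' call only works on the GPU node itself, so it is guarded:

```shell
# Inside a running GPU job, SLURM sets CUDA_VISIBLE_DEVICES for the
# allocated GPU(s); outside a job this variable is normally unset.
echo "Allocated GPUs: ${CUDA_VISIBLE_DEVICES:-none}"

# nvidia-smi only exists on the GPU node, so skip it elsewhere
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi -L    # list the visible GPU devices
fi
```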
 +
  
 ==== Check queue status ==== ==== Check queue status ====
Line 89: Line 110:
 746885     batch    model-selection    jjuma  R 4-20:45:15      8 compute06
 746998     batch        interactive  afeleke  R      30:09      1 compute06
- 746999     batch             blastp    aorth  R       7:20      6 compute05
-</code>
+ 746999     batch             blastp    aorth  R       7:20      6 compute05</code>
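
In day-to-day use it helps to filter the queue rather than list every job. A sketch using standard ''squeue'' options, guarded so it is a no-op on a machine without SLURM on the PATH:

```shell
# Common squeue filters; the guard makes this safe off-cluster
if command -v squeue >/dev/null 2>&1; then
    squeue -u "$USER"    # only your own jobs
    squeue -p batch      # only jobs in the batch partition
fi
```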