<code>[aorth@hpc: ~]$ interactive
salloc: Granted job allocation 1080
[aorth@taurus: ~]$</code>

**NB:** interactive jobs have a time limit of 8 hours: if you need more, you should write a batch script.
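
For reference, a batch script can request a longer run time with the ''-t''/''--time'' option of ''sbatch''. A minimal sketch follows; the job name, partition and time value are only examples, and the maximum time a partition allows is cluster-specific:
<code>#!/usr/bin/env bash
#SBATCH -p batch
#SBATCH -J long_job
#SBATCH -n 1
#SBATCH -t 2-00:00:00   # two days, in days-hours:minutes:seconds format

# replace with your own long-running command
./my_long_analysis.sh</code>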

You can also open an interactive session on a specific node of the cluster by specifying it with the ''-w'' command-line argument:
<code>[jbaka@hpc ~]$ interactive -w compute03
salloc: Granted job allocation 16349
[jbaka@compute03 ~]$</code>
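
If you are not sure which nodes exist or which ones are busy before picking one with ''-w'', the standard SLURM command ''sinfo'' lists partitions and node states (shown here without output, since the node list depends on the cluster):
<code>$ sinfo        # summary of partitions and node states
$ sinfo -N -l  # one line per node, with CPUs, memory and state</code>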
  
==== Batch jobs ====
Request 4 CPUs for an NCBI BLAST+ job in the ''batch'' partition. Create a file //blast.sbatch//:
<code>#!/usr/bin/env bash
#SBATCH -p batch
#SBATCH -J blastn
#SBATCH -n 4
  
# load the blast module
module load blast/2.6.0+
  
# run the blast with 4 CPU threads (cores)
blastn -query ~/data/sequences/drosoph_14_sequences.seq -db nt -num_threads 4</code>
  
Submit the script with ''sbatch'':
<code>$ sbatch blast.sbatch
Submitted batch job 1082</code>
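
Once the job is submitted, you can follow it with standard SLURM commands; unless the script redirects its output, stdout and stderr end up in a file called ''slurm-<jobid>.out'' in the directory where you ran ''sbatch''. The job ID below simply reuses the one from the example above:
<code>$ squeue -u $USER          # is the job pending or running?
$ scontrol show job 1082   # detailed view of the job's state and resources
$ less slurm-1082.out      # job output, once the job starts writing</code>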
  
Jobs that read and write large files in your home directory put load on the shared network storage. Instead, you can use a local "scratch" folder on the compute nodes to alleviate this burden, for example:
  
<code>#!/usr/bin/env bash
#SBATCH -p batch
#SBATCH -n 4
  
# load the blast module
module load blast/2.2.30+
  
WORKDIR=/var/scratch/$USER/$SLURM_JOBID
mkdir -p $WORKDIR
  
 echo "Using $WORKDIR on $SLURMD_NODENAME" echo "Using $WORKDIR on $SLURMD_NODENAME"
 echo echo
 +
 +# change to working directory on compute node
 +cd $WORKDIR
  
# run the blast with 4 CPU threads (cores)
blastn -query ~/data/sequences/drosoph_14_sequences.seq -db nt -num_threads 4 -out blast.out</code>
  
All output is directed to ''$WORKDIR/'', which is the temporary folder on the compute node. See these slides from [[https://alanorth.github.io/hpc-users-group3/#/2|HPC Users Group #3]] for more info.
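
Because ''/var/scratch'' is local to each compute node, results left there are not visible from other machines. A common pattern, sketched below as lines you could append to the end of the batch script above (it reuses ''$WORKDIR'' and ''blast.out'' from that script; the destination ''~/results'' is only an example), is to copy the results back to your home directory and then remove the temporary folder:
<code># copy results back to shared storage (example destination)
mkdir -p ~/results
cp blast.out ~/results/blast.$SLURM_JOBID.out

# clean up the temporary working directory on the compute node
rm -rf $WORKDIR</code>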
  
==== Check queue status ====
''squeue'' shows the jobs that are currently running on the cluster, waiting in the queue for resources to become available, or held for some reason:
<code>[jbaka@compute03 ~]$ squeue
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
             16330     batch interact  pyumbya  R    6:33:26      1 taurus
             16339     batch interact ckeambou  R    5:19:07      1 compute04
             16340     batch interact ckeambou  R    5:12:52      1 compute04
             16346     batch velvet_o  dkiambi  R    1:39:09      1 compute04
             16348     batch interact fkibegwa  R      22:38      1 taurus
             16349     batch interact    jbaka  R       3:27      1 compute03
</code>
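
Two common refinements, both standard SLURM usage rather than anything specific to this cluster: list only your own jobs, and cancel a job you no longer need by its JOBID:
<code>$ squeue -u $USER   # show only your jobs
$ scancel 16349     # cancel the job with this JOBID</code>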