====== Using SLURM ======
  
Our SLURM is configured with the following job queues (also called "partitions" in SLURM):

  * debug
  * batch
  * highmem
  
-"debug" is the default queue, which is useful for testing job parameters, program paths, etc. The runtime limit of the "debug" partition is 5 minutes, after which jobs are killed.+"debug" is the default queue, which is useful for testing job parameters, program paths, etc. The run-time limit of the "debug" partition is 5 minutes, after which jobs are killed.
  
To see more information about the queue configuration, use ''sinfo -lNe''.
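
To send a job to a queue other than the default, name the partition explicitly when submitting. For example, using the ''highmem'' partition from the list above (//myjob.sbatch// is just a placeholder script name):
<code>$ sbatch -p highmem myjob.sbatch</code>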

==== Interactive jobs ====
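A minimal way to request an interactive shell on a compute node with stock SLURM is shown below; the exact command, or any site-specific wrapper, may differ on this cluster. The prompt changes once the allocation is granted:
<code>$ srun -p batch --pty bash
[aorth@taurus: ~]$</code>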
  
**NB:** interactive jobs have a time limit of 8 hours; if you need more time, you should write a batch script.
==== Batch jobs ====
Request 4 CPUs for an NCBI BLAST+ job in the ''batch'' partition. Create a file //blast.sbatch//:
<code>#!/usr/bin/env bash
#SBATCH -p batch
#SBATCH -J blastn
#SBATCH -n 4

# load the blast module
module load blast/2.6.0+

# run the blast with 4 CPU threads (cores)
blastn -query ~/data/sequences/drosoph_14_sequences.seq -db nt -num_threads 4</code>
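
Note that ''#SBATCH -n 4'' asks SLURM for four //tasks//. Since ''blastn'' is a single multi-threaded process, an alternative that keeps all four cores on one node is to request four cores for one task with ''--cpus-per-task'' (a general SLURM option, not something specific to this cluster):
<code>#SBATCH -c 4</code>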
  
Submit the script with ''sbatch'':
<code>$ sbatch blast.sbatch
Submitted batch job 1082</code>
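
By default SLURM writes the job's output to a file named //slurm-<jobid>.out// in the directory you submitted from, so the output of the job above can be checked with:
<code>$ less slurm-1082.out</code>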
  
==== Batch job using local storage ====
Users' home folders are mounted over the network (on "wingu"), so when you're on mammoth or taurus, anything you write to disk (i.e. job output) has to make a round trip over the network.
  
Instead, you can use a local "scratch" folder on the compute nodes to alleviate this burden, for example:
<code>#!/usr/bin/env bash
#SBATCH -p batch
#SBATCH -n 4
#SBATCH -J blastn

# load the blast module
module load blast/2.2.30+

WORKDIR=/var/scratch/$USER/$SLURM_JOBID
mkdir -p $WORKDIR
  
echo
  
# change to working directory on compute node
cd $WORKDIR

# run the blast with 4 CPU threads (cores)
blastn -query ~/data/sequences/drosoph_14_sequences.seq -db nt -num_threads 4 -out blast.out</code>

All output is directed to ''$WORKDIR/'', which is the temporary folder on the compute node. See these slides from [[http://alanorth.github.io/hpc-users-group3/#/2|HPC Users Group #3]] for more info.
  
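Because ''$WORKDIR'' is on the compute node's local disk, copy anything you want to keep back to your network-mounted home folder at the end of the script and remove the scratch folder. A sketch of the final lines you might add (the //~/results// destination is just an example and must already exist):
<code># copy results back to the home folder, then clean up the scratch space
cp $WORKDIR/blast.out ~/results/
rm -rf $WORKDIR</code>
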
==== Check queue status ====
<code>squeue</code>
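
To list only your own jobs, filter by user; a specific job can be cancelled by passing its job ID to ''scancel'':
<code>$ squeue -u $USER
$ scancel 1082</code>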

==== Receive mail notifications ====
To receive mail notifications about the state of your job, add the following lines to your sbatch script, where <EMAIL_ADDRESS> is your email address:
<code>#SBATCH --mail-user <EMAIL_ADDRESS>
#SBATCH --mail-type ALL</code>

Notification mail types (''--mail-type'') can be BEGIN, END, FAIL, REQUEUE, or ALL (any state change).

Example:
<code>#SBATCH --mail-user J.Doe@cgiar.org
#SBATCH --mail-type ALL</code>
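
Several types can also be combined in a comma-separated list (support for lists may depend on the SLURM version), for example to be notified only when the job finishes or fails:
<code>#SBATCH --mail-user J.Doe@cgiar.org
#SBATCH --mail-type END,FAIL</code>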