==== Interactive jobs ====
Use the ''interactive'' command to request a shell on one of the compute nodes:
<code>[aorth@hpc: ~]$ interactive
[aorth@taurus: ~]$</code>
  
**NB:** interactive jobs have a time limit of 8 hours; if you need more, then you should write a batch script.

You can also open an interactive session on a specific node of the cluster by specifying it with the ''-w'' command-line argument:
<code>[jbaka@hpc ~]$ interactive -w compute03
salloc: Granted job allocation 16349
[jbaka@compute03 ~]$</code>
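
If you want to pick a node, you can first check which nodes exist and whether they are free with ''sinfo'' (a standard SLURM command; the output below is illustrative):
<code>[jbaka@hpc ~]$ sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
batch*       up   infinite      3   idle compute[01-03]</code>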
==== Batch jobs ====
Request 4 CPUs for an NCBI BLAST+ job in the ''batch'' partition. Create a file //blast.sbatch//:
<code>#!/usr/bin/env bash
#SBATCH -p batch
#SBATCH -J blastn
#SBATCH -n 4
  
# load the blast module
module load blast/2.6.0+

# run the blast with 4 CPU threads (cores)
blastn -query ~/data/sequences/drosoph_14_sequences.seq -db nt -num_threads 4</code>
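
Submit the job to the queue with ''sbatch'' (standard SLURM usage; the job ID in the output is illustrative):
<code>[jbaka@hpc ~]$ sbatch blast.sbatch
Submitted batch job 16350</code>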
A job like the one above does all of its reading and writing in your home directory, which lives on storage shared over the network by the whole cluster, so heavy I/O there slows everyone down. Instead, you can use a local "scratch" folder on the compute nodes to alleviate this burden, for example:
  
<code>#!/usr/bin/env bash
#SBATCH -p batch
#SBATCH -n 4
#SBATCH -J blastn

# load the blast module
module load blast/2.6.0+

# make a job-specific working directory in the node's local scratch
# (NB: this scratch path is an example; check your cluster's convention)
WORKDIR=/var/scratch/$USER/$SLURM_JOBID
mkdir -p $WORKDIR
cd $WORKDIR

# run the blast with 4 CPU threads (cores)
blastn -query ~/data/sequences/drosoph_14_sequences.seq -db nt -num_threads 4 -out blast.out</code>
  
All output is directed to ''$WORKDIR/'', which is the temporary folder on the compute node. See these slides from [[https://alanorth.github.io/hpc-users-group3/#/2|HPC Users Group #3]] for more info.
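
Because the scratch folder lives on the compute node itself, a script like the one above would typically copy its results back to your (shared) home directory and clean up at the end. A minimal sketch, assuming an illustrative //results// destination folder:
<code># at the end of the sbatch script: copy results home and clean up
mkdir -p ~/results
cp $WORKDIR/blast.out ~/results/
rm -rf $WORKDIR</code>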
  
==== Check queue status ====
<code>squeue</code>
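
To list only your own jobs, pass your username with the standard ''-u'' option (the output below is illustrative):
<code>[jbaka@hpc ~]$ squeue -u $USER
 JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
 16349     batch   blastn    jbaka  R       5:23      1 compute03</code>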

==== Receive mail notifications ====
To receive mail notifications about the state of your job, add the following lines to your sbatch script, where <EMAIL_ADDRESS> is your email address:
<code>#SBATCH --mail-user <EMAIL_ADDRESS>
#SBATCH --mail-type ALL</code>

Notification mail types (''--mail-type'') can be BEGIN, END, FAIL, REQUEUE and ALL (any state change).
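
For example, the header of the //blast.sbatch// script from above would then look like this (the address is a placeholder):
<code>#!/usr/bin/env bash
#SBATCH -p batch
#SBATCH -J blastn
#SBATCH -n 4
#SBATCH --mail-user <EMAIL_ADDRESS>
#SBATCH --mail-type ALL</code>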