using-slurm
Our SLURM is configured with the following job queues (also called "partitions" in SLURM terminology):
  * debug
  * batch
  * highmem
To see more information about the queue configuration, use ''sinfo''.
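For example, these standard SLURM invocations summarize the partitions; the partition name is taken from the list above, and the output will of course differ per cluster:

```shell
# one summary line per partition: availability, time limit, node counts
sinfo -s

# full configuration of a single partition, e.g. "batch"
scontrol show partition batch
```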
==== Interactive jobs ====
Use ''salloc'' to request an interactive session on the cluster:

<code>$ salloc
salloc: Granted job allocation 1080
[aorth@taurus: ~]$</code>
**NB:** interactive jobs have a time limit of 8 hours; if you need more, you should write a batch script.
You can also open an interactive session on a specific node of the cluster by selecting it with the ''-w'' (''--nodelist'') option of ''salloc'':

<code>$ salloc -w compute03
salloc: Granted job allocation 16349
[jbaka@compute03 ~]$</code>
==== Batch jobs ====
Request 4 CPUs for an NCBI BLAST+ job in the ''batch'' queue:
<code>#!/usr/bin/env bash
#SBATCH -p batch
#SBATCH -J blastn
#SBATCH -n 4

# load the blast module
module load blast/2.6.0+

# run the blast with 4 CPU threads (cores)
blastn -query ~/
</code>
Submit the script with ''sbatch'':
<code>Submitted batch job 1082</code>
==== Batch job using local storage ====
Users' home folders are mounted over the network, so heavy reading and writing there from jobs on the compute nodes is slow and adds load on the shared storage.

Instead, you can use a local "scratch" directory on the compute node where your job runs, for example:
<code>#!/usr/bin/env bash
#SBATCH -p batch
#SBATCH -n 4
#SBATCH -J blastn

# load the blast module
module load blast/2.6.0+

WORKDIR=/
mkdir -p $WORKDIR

echo "Using $WORKDIR on $SLURMD_NODENAME"
echo

# change to working directory on compute node
cd $WORKDIR

# run the blast with 4 CPU threads (cores)
blastn -query ~/
</code>
All output is directed to ''$WORKDIR'' on the compute node's local disk, so remember to copy the results you need back to your home folder at the end of the job.
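The per-job working-directory pattern used above can be sketched in plain shell. Everything here (the ''/tmp'' paths, the fallback job ID) is illustrative: on the cluster, SLURM itself sets ''$SLURM_JOBID'' and the scratch location is site-specific.

```shell
#!/bin/sh
# stand-in for the value SLURM exports inside a real job
SLURM_JOBID=${SLURM_JOBID:-12345}

# per-job scratch directory (illustrative path, not the cluster's real one)
WORKDIR=/tmp/scratch-demo/${USER:-demo}/$SLURM_JOBID
mkdir -p "$WORKDIR"
cd "$WORKDIR" || exit 1

# the job writes its output locally...
echo "blast results" > output.txt

# ...then copies results off the node and cleans up after itself
RESULTS=/tmp/scratch-demo-results
mkdir -p "$RESULTS"
cp output.txt "$RESULTS/"
cd / && rm -rf "$WORKDIR"
```

Using a directory keyed on the job ID means two of your jobs landing on the same node can never clobber each other's files.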
==== Check queue status ====
<code>squeue</code>
using-slurm.txt · Last modified: 2022/11/03 11:38 by jean-baka