using-slurm
Our SLURM is configured with the following job queues (also called "partitions"):

  * debug
  * batch
  * highmem
- | " | + | " |
To see more information about the queue configuration, use ''sinfo''.
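If you only need one detail, the columnar output of ''sinfo'' can be filtered with standard shell tools. A minimal sketch, using a made-up sample of sinfo-style output (the partitions, node counts and time limits below are illustrative, not our actual configuration):

```shell
# Illustrative only: sample sinfo-style output (not real cluster data),
# filtered with awk to print the time limit of the "batch" partition.
sinfo_sample='PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
debug* up 1:00:00 2 idle compute[01-02]
batch up infinite 4 idle compute[03-06]'

echo "$sinfo_sample" | awk '$1 == "batch" {print $1, $3}'
```

The same `awk` filter works on live ''sinfo'' output piped in directly.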
salloc: Granted job allocation 1080
[aorth@taurus: ~]$

**NB:** interactive jobs have a time limit of 8 hours: if you need more, then you should write a batch script.

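For work that will not fit in that window, a batch job can request a longer wall time with SLURM's ''--time'' directive. A minimal sketch of the script header (the job name and the two-day limit are placeholders to adapt):

```shell
#!/bin/bash
#SBATCH -p batch
#SBATCH -J longjob          # placeholder job name
#SBATCH -n 1
#SBATCH --time=2-00:00:00   # two days, in SLURM's days-hours:minutes:seconds format

# stand-in for the real work; HOSTNAME may be unset outside a job
MSG="job started on ${HOSTNAME:-unknown}"
echo "$MSG"
```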
You can also open an interactive session on a specific node of the cluster by specifying it through the ''-w'' option:
<code>
salloc: Granted job allocation 16349
[jbaka@compute03 ~]$</code>
==== Batch jobs ====
Request 4 CPUs for an NCBI BLAST+ job in the ''batch'' queue:
<code>#!/bin/bash
#SBATCH -p batch
#SBATCH -J blastn
#SBATCH -n 4

# load the blast module
module load blast/2.6.0+

# run the blast with 4 CPU threads (cores)
blastn -query ~/</code>
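The thread count passed to the application should match the CPUs requested with ''#SBATCH -n''. One way to keep the two in sync is to read SLURM's ''SLURM_NTASKS'' environment variable instead of hard-coding the number; a sketch (the fallback value and the echoed line are illustrative):

```shell
# SLURM sets SLURM_NTASKS inside a job; fall back to 4 so the snippet
# also runs outside SLURM.
THREADS="${SLURM_NTASKS:-4}"

# illustrative: the value a real "-num_threads $THREADS" would receive
echo "num_threads=$THREADS"
```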
Submit the script with ''sbatch'':
<code>
Submitted batch job 1082</code>
==== Batch job using local storage ====
Users' home folders are mounted over the network, so it is a bad idea to run I/O-intensive jobs there.
Instead, you can use a local "scratch" folder on the compute node's own disk:
<code>#!/bin/bash
#SBATCH -p batch
#SBATCH -n 4
#SBATCH -J blastn

# load the blast module
module load blast/2.6.0+

WORKDIR=/
mkdir -p $WORKDIR

echo

# change to working directory on compute node
cd $WORKDIR

# run the blast with 4 CPU threads (cores)
blastn -query ~/</code>

All output is written to ''$WORKDIR'' on the compute node, so remember to copy any results you need back to your home folder before the job finishes.
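The copy-back step matters because the scratch folder exists only on that one compute node. The overall pattern can be sketched outside SLURM, with temporary directories standing in for the node-local scratch area and the network home folder (all paths here are stand-ins):

```shell
#!/bin/sh
# Stand-ins: mktemp -d plays the role of the node-local scratch area
# and of a results folder under $HOME.
SCRATCH=$(mktemp -d)
WORKDIR="$SCRATCH/job"
mkdir -p "$WORKDIR"

# ... a real job would write its output here ...
echo "result data" > "$WORKDIR/output.txt"

# copy results back to "home" before the job ends, then clean up scratch
DEST=$(mktemp -d)
cp "$WORKDIR/output.txt" "$DEST/"
rm -rf "$SCRATCH"

cat "$DEST/output.txt"
```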
==== Check queue status ====
<code>squeue</code>
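''squeue'' prints plain columns (JOBID, PARTITION, NAME, USER, ST, ...), so its output is easy to post-process. A sketch using a made-up one-job sample (real output has the same default columns):

```shell
# Illustrative only: a made-up squeue-style listing; awk keeps the job
# ID and name of jobs in the running ("R") state.
squeue_sample='JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
1082 batch blastn aorth R 0:10 1 compute05'

echo "$squeue_sample" | awk 'NR > 1 && $5 == "R" {print $1, $3}'
```

Piping live ''squeue'' output through the same filter lists just your running jobs.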
using-slurm.txt · Last modified: 2022/11/03 11:38 by jean-baka