====== Using SLURM ======
Our SLURM is configured with the following job queues (also called "partitions"):

  * debug
  * batch
  * highmem

"debug" is the default partition: jobs that do not explicitly request a partition are sent there (it is marked with a ''*'' in the ''sinfo'' output).
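To target a non-default partition, request it in the job script's header. A minimal sketch (directive syntax only — the job name and core count below are hypothetical, not site defaults):

```shell
#!/usr/bin/env bash
# illustrative sbatch header only -- values are assumptions, not site policy
#SBATCH -p highmem   # run in the "highmem" partition instead of the default
#SBATCH -J bigjob    # hypothetical job name
#SBATCH -n 8         # hypothetical core count

# ... your commands here ...
```

The same ''-p'' flag also works on the command line, e.g. ''sbatch -p highmem script.sh''.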
To see more information about the queue and node configuration, use ''sinfo'':

<code>$ sinfo -N -l
Thu Aug 04 15:08:48 2022
NODELIST   NODES  PARTITION  STATE  CPUS  ...
compute03  ...
compute05  ...
compute06  ...
compute07  ...
hpc        1      debug*     ...
</code>

The above tells you, for instance, that compute06 has 64 CPUs, and that a job sent to the "debug" partition will run on the ''hpc'' node.
===== Submitting jobs =====

==== Interactive jobs ====
To get an interactive session on one of the compute nodes, use the ''interactive'' command:

<code>$ interactive
salloc: Granted job allocation 1080
[aorth@compute05: ~]$</code>

**NB:** interactive jobs have a time limit of 8 hours: if you need more, you should write an sbatch script.

You can also open an interactive session on a specific node of the cluster by naming it with the ''-w'' option:

<code>$ interactive -w compute03
salloc: Granted job allocation 16349
[jbaka@compute03: ~]$</code>
==== Batch jobs ====
We write a SLURM script below. The parameters in its header request four CPU cores (''-n 4'') on the "batch" partition and name the job "blastn":

<code bash blast.sh>#!/usr/bin/env bash
#SBATCH -p batch
#SBATCH -J blastn
#SBATCH -n 4

# load the blast module
module load blast/2.6.0+

# run the blast with 4 CPU threads (cores)
blastn -query ~/... -num_threads 4</code>

In the above, please **DO NOT FORGET the ''#!'' (shebang) line** at the top of the script.

We then submit the script to the queue with ''sbatch'':

<code>$ sbatch blast.sh
Submitted batch job 1082</code>
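On success, ''sbatch'' prints "Submitted batch job NNNN". When scripting around job submission it is handy to capture that job ID. A minimal sketch — the ''sbatch'' output line is simulated here so the snippet runs without a cluster:

```shell
#!/usr/bin/env bash
# simulate the message sbatch prints on success
output="Submitted batch job 1082"

# the job ID is the last whitespace-separated word of the message
jobid=${output##* }
echo "$jobid"   # prints: 1082
```

On the cluster you would instead write ''output=$(sbatch blast.sh)'' and then use ''$jobid'' with commands such as ''scancel''.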
==== Batch job using local storage ====
Users' home folders are mounted over the network, so any heavy I/O performed in them is slow and puts load on the shared storage.
Instead, you can use a local "scratch" directory on the compute node where your job runs:

<code bash>#!/usr/bin/env bash
#SBATCH -p batch
#SBATCH -J blastn
#SBATCH -n 4

# load the blast module
module load blast/2.6.0+

# per-job working directory on the compute node's local disk
WORKDIR=/var/scratch/$USER/$SLURM_JOBID
mkdir -p $WORKDIR

echo "Using working directory $WORKDIR"

# change to working directory on compute node
cd $WORKDIR

# run the blast with 4 CPU threads (cores)
blastn -query ~/... -num_threads 4</code>

All output is written to ''$WORKDIR'' on the compute node's local disk; remember to copy the results you need back to your home folder when the job finishes.
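Node-local scratch space is shared by all jobs on that node, so it is good practice to remove your working directory when the job ends, whatever the outcome. A minimal sketch using a bash ''trap''; the ''/tmp/scratch-demo'' path and the ''${SLURM_JOBID:-$$}'' fallback are illustrative so the sketch also runs outside the cluster:

```shell
#!/usr/bin/env bash
# per-job scratch directory; fall back to the shell PID outside SLURM
JOBID=${SLURM_JOBID:-$$}
WORKDIR=/tmp/scratch-demo/${USER:-nobody}/$JOBID   # on the cluster: node-local scratch
mkdir -p "$WORKDIR"

# remove the scratch directory when the script exits, even on error
trap 'rm -rf "$WORKDIR"' EXIT

cd "$WORKDIR"
echo "working in $PWD"
```

Copy any results you want to keep out of ''$WORKDIR'' *before* the script exits, since the trap deletes it unconditionally.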
==== Check queue status ====
''squeue'' shows the current state of the job queues:

<code>$ squeue
 JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
   ...
</code>
using-slurm.txt · Last modified: 2022/11/03 11:38 by jean-baka