====== Using SLURM ======

Our SLURM is configured with the following job queues (also called "partitions"):

  * debug
  * batch
  * highmem

To see more information about the queue configuration, use ''sinfo'' (here in its node-oriented long format):
<code>
$ sinfo -N -l
Fri Feb 1 15:27:44 2019
NODELIST   NODES PARTITION STATE CPUS ...
compute2       1 ...              64   ...
compute03      1 ...
compute03      1 ...
compute04      1 ...              8    ...
hpc            1 debug*    ...
mammoth        1 ...
taurus         1 ...
</code>

The above tells you, for instance, that compute04 has 8 CPUs while compute2 has 64 CPUs, and that a job sent to the "debug" partition (the default one, marked with an asterisk) will run on the machine called hpc.
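
If you only want to inspect one queue, ''sinfo'' also accepts a partition filter via its ''-p'' flag (a minimal example; the partition name is one of those listed above):

<code>
$ sinfo -p highmem
</code>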
===== Submitting jobs =====
==== Interactive jobs ====
How to get an interactive session, i.e. a shell on one of the compute nodes:
<code>
salloc: Granted job allocation 1080
[aorth@taurus: ~]$
</code>
+ | |||
+ | **NB:** interactive jobs have a time limit of 8 hours: if you need more, then you should write a batch script. | ||
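
Batch scripts declare their wall-time request with the standard ''-t''/''--time'' directive (a minimal sketch; the maximum actually granted depends on the partition's configured limit):
<code>
#SBATCH -t 12:00:00    # request 12 hours of wall time (hours:minutes:seconds)
</code>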
+ | |||
+ | You can also open an interactive session on a specific node of the cluster by specifying it through the '' | ||
+ | < | ||
+ | salloc: Granted job allocation 16349 | ||
+ | [jbaka@compute03 ~]$</ | ||
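
These interactive sessions are ''salloc'' allocations under the hood, so an equivalent request can also be made with ''salloc'' directly (a sketch, assuming ''salloc'' is available to you; ''-w'' selects the node and ''-n'' the number of tasks):
<code>
$ salloc -w compute03 -n 1
</code>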
==== Batch jobs ====
We are writing a SLURM script below. The parameters in its header request the ''batch'' partition, the job name ''blastn'', and 4 tasks (CPUs):
<code>
#!/usr/bin/env bash
#SBATCH -p batch
#SBATCH -J blastn
#SBATCH -n 4

# load the blast module
module load blast/2.6.0+

# run the blast with 4 CPU threads (cores)
blastn -query ~/... -num_threads 4
</code>
We then submit the script to SLURM using ''sbatch'' (here the script is saved as, say, ''blast.sbatch''):
<code>
$ sbatch blast.sbatch
Submitted batch job 1082
</code>
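
By default ''sbatch'' writes the job's standard output and error to a file named ''slurm-<jobid>.out'' in the directory you submitted from, so the job above can be checked with:
<code>
$ cat slurm-1082.out
</code>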
+ | |||
+ | ==== Batch job using local storage ==== | ||
+ | Users' home folders are mounted over the network (on " | ||
+ | |||
+ | Instead, you can use a local " | ||
+ | |||
<code>
#!/usr/bin/env bash
#SBATCH -p batch
#SBATCH -J blastn
#SBATCH -n 4

# load the blast module
module load blast/2.6.0+

# create a job-specific working directory on the compute node's local disk
# (the path below is an assumption; adjust to your cluster's scratch location)
WORKDIR=/var/scratch/$USER/$SLURM_JOBID
mkdir -p $WORKDIR

echo "Using $WORKDIR on $SLURMD_NODENAME"
echo

# change to working directory on compute node
cd $WORKDIR

# run the blast with 4 CPU threads (cores)
blastn -query ~/... -num_threads 4
</code>
+ | |||
+ | All output is directed to '' | ||
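
Since ''$WORKDIR'' is on the compute node rather than in your home folder, it is good practice to copy results back and clean up at the end of the script; a minimal sketch (the output file name is a placeholder):
<code>
# copy results back to your home folder ("blast.out" is a placeholder name)
cp blast.out ~/
# remove the job-specific scratch directory
rm -rf $WORKDIR
</code>
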
==== Check queue status ====
''squeue'' shows the jobs currently running or waiting in the queue:

<code>
$ squeue
  JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
  ...
</code>
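
To see only your own jobs, pass your username with ''-u''; a running or queued job can be cancelled with ''scancel'' and its job ID:
<code>
$ squeue -u $USER
$ scancel 1082
</code>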