====== Using SLURM ======
To see more information about the queue configuration, use ''sinfo -lNe'':
<code>
Thu Aug 04 15:08:48 2022
NODELIST   NODES PARTITION       STATE CPUS    S:C:T  MEMORY ...
compute03      1     batch         ...
compute05      1     batch         ...
compute06      1     batch         ...
compute07      1     batch         ...
hpc            1    debug*         ...
</code>

The above tells you, for instance, how many CPUs and how much memory each compute node has, and which partition it belongs to.
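
If you only need a quick overview of the partitions rather than the per-node details, the short forms of ''sinfo'' are enough (these are standard SLURM commands, not specific to this cluster):

<code bash>
# one line per partition, with node states summarised
sinfo

# an even more compact summary (allocated/idle/other/total node counts)
sinfo -s
</code>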

===== Submitting jobs =====

==== Interactive jobs ====

To work interactively on a compute node, request an interactive session: SLURM allocates resources on one of the compute nodes and opens a shell there, for example:

<code>
salloc: Granted job allocation 1080
[aorth@compute05: ~]$
</code>

**NB:** interactive jobs have a time limit of 8 hours: if you need more, then you should write an sbatch script (see the "Batch jobs" section below).

You can also open an interactive session on a specific node of the cluster by specifying it through the ''-w'' (''--nodelist'') option, as sketched below.
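
For example (a sketch only: ''interactive'' stands for whatever wrapper your cluster provides around ''salloc'', and ''compute06'' is just an example node name; the plain SLURM command works everywhere):

<code bash>
# interactive session on whichever node the scheduler picks,
# assuming an "interactive" wrapper around salloc is available
interactive

# pin the session to a specific node with -w / --nodelist
interactive -w compute06

# plain SLURM equivalent: allocate compute06, then start a shell on it
salloc -w compute06 srun --pty bash
</code>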

==== Batch jobs ====

We are writing a SLURM script below. The parameters in its header request 4 CPUs in the ''batch'' partition and set the job name; the body of the script then runs ''blastn'' using those 4 CPUs.

<code bash>
#!/usr/bin/env bash
#SBATCH -p batch
#SBATCH -J blastn
#SBATCH -n 4

# run the blast with 4 CPU threads (cores)
blastn -query ~/... -num_threads 4 ...
</code>

In the above, please **DO NOT FORGET** the ''#SBATCH'' lines: bash treats them as comments, but SLURM reads them as your resource request (partition, job name and number of CPUs).

We then submit the script with the ''sbatch'' command:
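
(The file name and the job number below are only an example; use whatever name you gave your script.)

<code>
$ sbatch blastn.sbatch
Submitted batch job 746600
</code>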

To avoid hammering the shared network filesystem with a job's intermediate reads and writes, you can instead use a local "scratch" directory on the compute node and copy the results back when the job is done:

<code bash>
#!/usr/bin/env bash
#SBATCH -p batch
#SBATCH -J blastn

# ... (the rest of the script stages data on the local scratch disk;
# a sketch of the pattern follows below)
</code>
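
The general pattern is sketched below: create a private directory on the node-local scratch disk, copy the input data there, run the job locally, then copy the results back. The scratch path ''/var/scratch'' and all file names here are illustrative; check where your cluster mounts its local scratch space.

<code bash>
#!/usr/bin/env bash
#SBATCH -p batch
#SBATCH -J blastn
#SBATCH -n 4

# make a private working directory on the node-local scratch disk
# (/var/scratch is an assumption; adjust to your cluster)
WORKDIR=$(mktemp -d /var/scratch/"$USER"_blastn.XXXXXX)

# copy the input data onto the local disk (paths are illustrative)
cp ~/data/sequences.fasta "$WORKDIR"/
cd "$WORKDIR"

# run the blast with 4 CPU threads, reading and writing locally
blastn -query sequences.fasta -db /path/to/db -num_threads 4 -out blastn.out

# copy the results back to network storage and clean up the scratch space
cp blastn.out ~/results/
rm -rf "$WORKDIR"
</code>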

==== Check queue status ====

''squeue'' shows the jobs currently running or waiting in the queue:

<code>
  JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
 746596       ...
 746597       ...
 746885       ...
 746998       ...
 746999       ...
</code>
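
A couple of generally useful variations (standard ''squeue'' options, not specific to this cluster) restrict the listing to a single user:

<code bash>
# only my own jobs
squeue -u "$USER"

# only the jobs of a particular user (the user name is an example)
squeue -u jbaka
</code>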

In addition to the information above, it is sometimes useful to know the number of CPUs (computing cores) allocated to each job: the scheduler queues jobs that ask for resources which aren't currently available, most often because the running jobs are using up all the CPUs on a node. To get the number of CPUs for each job and display the whole thing nicely, the command is slightly more involved (one possible formulation is sketched after the example outputs below):

<code>
  JOBID PARTITION             NAME   ...   CPUS
  16330       ...
  16339       ...
  16340       ...
  16346     batch velvet_out_ra_10   ...
  16348       ...
  16349       ...
</code>

or, alternatively:

<code>
    USER   JOBID   ...
 pyumbya     ...
ckeambou     ...
ckeambou     ...
 dkiambi     ...
fkibegwa     ...
   jbaka     ...
</code>
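
The exact command is a matter of taste. One possible formulation, using standard ''squeue'' format codes (''%C'' is the CPU count; the field list below is an illustration, not necessarily the one originally used on this page):

<code bash>
# job id, partition, name, user, state, elapsed time, nodes and CPU count
squeue -o "%.8i %.9P %.20j %.10u %.2t %.10M %.6D %.5C"

# the same, with the user name as the first column
squeue -o "%.10u %.8i %.9P %.20j %.2t %.10M %.6D %.5C"
</code>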