using-slurm
<code>[aorth@taurus: ~]$</code>

**NB:** interactive jobs have a time limit of 8 hours; if you need more, you should write a batch script.
You can also open an interactive session on a specific node of the cluster by specifying it through the ''-w'' (''--nodelist'') option:
<code>$ salloc -w compute03
salloc: Granted job allocation 16349
[jbaka@compute03 ~]$</code>
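An interactive allocation can also reserve several CPUs at once; a minimal sketch, assuming the standard SLURM ''-n'' (task count) flag behaves the usual way on this cluster:

<code>$ salloc -n 4 -w compute03</code>

Type ''exit'' on the compute node to release the allocation.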
==== Batch jobs ====
Request 4 CPUs for an NCBI BLAST+ job in the ''batch'' partition:
<code>#!/usr/bin/env bash
#SBATCH -p batch
#SBATCH -J blastn
#SBATCH -n 4

# load the blast module
module load blast/2.6.0+

# run the blast with 4 CPU threads (cores)
blastn -num_threads 4 -query ...
</code>
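A batch script like the one above is submitted with ''sbatch'' and monitored with ''squeue''; the filename ''blast.sbatch'' here is only an example:

<code>$ sbatch blast.sbatch
$ squeue -u $USER    # show only your own jobs</code>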
Instead, you can use a local "scratch" directory on the compute node:
<code>#!/usr/bin/env bash
#SBATCH -p batch
#SBATCH -n 4

# ...

blastn -query ~/...
</code>
All output is directed to the ''scratch'' directory.
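One common way to use node-local scratch space is to create a per-job directory, run there, and copy the results back to your NFS home before the job ends. A sketch, assuming the scratch area is mounted at ''/var/scratch'' (check your site's actual path):

<code># inside the sbatch script: work under a per-job scratch directory
WORKDIR=/var/scratch/$SLURM_JOB_ID   # /var/scratch is an assumed path
mkdir -p "$WORKDIR"
cd "$WORKDIR"

# ... run the analysis here ...

# copy results back to the NFS home directory before the job ends
cp -r "$WORKDIR" ~/results-$SLURM_JOB_ID</code>

''$SLURM_JOB_ID'' is set by SLURM inside every job, so concurrent jobs never share a directory.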
==== Check queue status ====
''squeue'' shows the jobs currently in the queue:
<code>$ squeue
  JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
  ...
</code>
In addition to the information above, it is sometimes useful to know the number of CPUs (computing cores) allocated to each job: the scheduler will queue jobs asking for resources that aren't available, most often because the other jobs are eating up all the CPUs available on the host. To get the number of CPUs for each job and display the whole thing nicely, the command is slightly more involved:
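A sketch of such a command, using the ''-o''/''--format'' option of ''squeue''; ''%C'' prints the number of CPUs, and the width modifiers (e.g. ''%.8i'') are only a formatting choice:

<code>$ squeue -o "%.8i %.9P %.20j %.8u %.2t %.10M %.6D %.4C"</code>

''%i'' is the job ID, ''%P'' the partition, ''%j'' the job name, ''%u'' the user, ''%t'' the state, ''%M'' the time used and ''%D'' the node count.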
<code>  JOBID PARTITION             NAME  ...  CPUS
  16330       ...
  16339       ...
  16340       ...
  16346     batch velvet_out_ra_10  ...
  16348       ...
  16349       ...
</code>
or, alternatively:
<code>    USER    JOBID  ...
 pyumbya      ...
ckeambou      ...
ckeambou      ...
 dkiambi      ...
fkibegwa      ...
   jbaka      ...
</code>

==== Receive mail notifications ====
To receive mail notifications about the state of your job, add the following lines to your sbatch script, where ''...'' stands for your e-mail address:
<code>#SBATCH --mail-user ...
#SBATCH --mail-type ALL</code>

Notification mail types (''--mail-type'') can be BEGIN, END, FAIL, REQUEUE and ALL (any state change).
using-slurm.txt · Last modified: 2022/11/03 11:38 by jean-baka