To see more information about the queue configuration, use ''sinfo -lNe''.
  
<code>$ sinfo -lNe
Thu Aug 04 15:08:48 2022
NODELIST   NODES PARTITION       STATE CPUS    S:C:T MEMORY TMP_DISK WEIGHT AVAIL_FE REASON              
compute03      1   highmem        idle    8    2:4:1 322249            10   (null) none 
compute05      1     batch       mixed   48   2:24:1 386500            10   (null) none 
compute06      1     batch       mixed   64   2:32:1 257491             5   (null) none 
compute07      1   highmem        idle    8    1:8:1 101956             5   (null) none 
hpc            1    debug*        idle    4    1:4:1 128876             1   (null) none</code>
  
The above tells you, for instance, that compute06 has 64 CPUs, and that a job sent to the "highmem" partition (a SLURM term equivalent to "queue", as per the vocabulary in use with other schedulers, e.g. Sun Grid Engine) will end up being run on either compute03 or compute07.
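If you want to double-check which nodes make up a given partition, you can restrict the same listing to that partition with ''-p'' (shown here for ''highmem''; the output is abridged to the two node rows, which simply repeat the corresponding lines of the table above):

<code>$ sinfo -lNe -p highmem
NODELIST   NODES PARTITION       STATE CPUS    S:C:T MEMORY TMP_DISK WEIGHT AVAIL_FE REASON              
compute03      1   highmem        idle    8    2:4:1 322249            10   (null) none 
compute07      1   highmem        idle    8    1:8:1 101956             5   (null) none</code>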
  
===== Submitting jobs =====
<code>[aorth@hpc: ~]$ interactive 
salloc: Granted job allocation 1080
[aorth@compute05: ~]$</code>
  
**NB:** interactive jobs have a time limit of 8 hours; if you need more, you should write an sbatch script.
  
You can also open an interactive session on a specific node of the cluster by specifying it with the ''-w'' command-line argument.
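For example (the node name and job allocation number below are only illustrative; pick any node from the ''sinfo'' listing above):

<code>[aorth@hpc: ~]$ interactive -w compute05
salloc: Granted job allocation 1081
[aorth@compute05: ~]$</code>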
==== Batch jobs ====
We are writing a SLURM script below. The parameters in its header request 4 CPUs in the ''batch'' partition, and name our job "blastn". This name is only used internally by SLURM for reporting purposes. So let's go ahead and create a file //blast.sbatch//:
<code>#!/usr/bin/bash -l
#SBATCH -p batch
#SBATCH -J blastn
#SBATCH -n 4

# load the blast module (the exact module name/version may differ on your cluster)
module load blast

# run the blast with 4 CPU threads (cores)
blastn -query ~/data/sequences/drosoph_14_sequences.seq -db nt -num_threads 4</code>

In the above, please **DO NOT FORGET the ''-l'' option** on the first ("shebang") line, as it enables the ''module'' command.
  
We then submit the script with the ''sbatch'' command:
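A typical submission looks like this (the job ID in the reply is illustrative and will differ for every submission):

<code>[aorth@hpc: ~]$ sbatch blast.sbatch
Submitted batch job 1082</code>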