====== Using SLURM ======

  * highmem

To see more information about the queue configuration, run ''sinfo -N -l'':
<code>
Thu Aug 04 15:08:48 2022
NODELIST   NODES PARTITION STATE CPUS ...
compute03    ...
compute05    ...
compute06    ...
compute07    ...
hpc            1 debug*   ...
</code>

The above tells you, for instance, that compute06 has 64 CPUs, and that a job sent to the "debug" partition will run on the hpc node.
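
For more detail about a particular node you can also query ''scontrol'' (a standard SLURM command; the node name here is just an example):

<code>$ scontrol show node compute06</code>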

===== Submitting jobs =====

==== Interactive jobs ====

How to get an interactive session, i.e. a shell prompt on one of the compute nodes, via ''salloc'':

<code>salloc: Granted job allocation 1080
[aorth@compute05: ~]$</code>

**NB:** interactive jobs have a time limit of 8 hours; if you need more, you should write an sbatch script.
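
In an sbatch script you can request a longer run time with SLURM's standard ''-t''/''--time'' option (a sketch; the maximum you may request depends on the partition configuration):

<code>#SBATCH -t 3-00:00:00    # request a three-day time limit</code>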

You can also open an interactive session on a specific node of the cluster by specifying it with the ''-w'' option:

<code>salloc: Granted job allocation 16349
[jbaka@compute03: ~]$</code>
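
The ''-w'' flag is SLURM's standard ''--nodelist'' option, so an equivalent request with plain ''salloc'' would look like this (a sketch; the partition and CPU count are assumptions):

<code>$ salloc -p batch -n 1 -w compute03</code>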

==== Batch jobs ====

We are writing a SLURM script below. The parameters in its header request resources for the job from the scheduler:

<code>#!/usr/bin/env bash
#SBATCH -p batch
#SBATCH -J blastn

...

blastn -query ~/...</code>

In the above, please **DO NOT FORGET the ''#!/usr/bin/env bash'' line** at the top: without a shebang, sbatch will refuse to accept the script.

We then submit the script to the queue with ''sbatch'' (here the script is saved as ''blast.sh''):

<code>$ sbatch blast.sh
Submitted batch job 1082</code>
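
By default SLURM writes the job's standard output and error to ''slurm-<jobid>.out'' in the directory you submitted from, so for the job above you can check the results with:

<code>$ cat slurm-1082.out</code>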

Instead, you can use a local "scratch" directory on the compute node where the job runs:

<code>#!/usr/bin/env bash
#SBATCH -p batch
#SBATCH -J blastn
#SBATCH -n 4

# load the blast module
...

blastn -query ~/...</code>

All output is directed to the local scratch directory on the compute node while the job runs; remember to copy your results back to your home directory at the end of the job.
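
As a minimal sketch of the whole pattern, assuming a hypothetical ''/var/scratch'' directory on the compute nodes (check the actual path on your cluster) and placeholder BLAST file names:

<code>#!/usr/bin/env bash
#SBATCH -p batch
#SBATCH -J blastn
#SBATCH -n 4

# make a private working directory on the node's local disk
# (/var/scratch is an assumed path, not necessarily this cluster's)
WORKDIR=/var/scratch/${SLURM_JOB_ID}
mkdir -p "$WORKDIR"
cd "$WORKDIR"

# run the job, writing output to the local disk
blastn -query ~/data/sequences.fasta -db nt -out blast.out -num_threads 4

# copy the results back to shared storage and clean up
cp blast.out ~/data/
rm -rf "$WORKDIR"</code>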

==== Run job using a GPU ====

Currently there is only one compute node with GPU capabilities. As of February 2026, compute06 has an NVIDIA Tesla V100 with 32GB of RAM. In order to use it you will need to add an extra "gres" request to your job, for example ''--gres=gpu:1'' to request one GPU:

<code>#!/usr/bin/env bash
#SBATCH -p gpu
#SBATCH --gres=gpu:1
#SBATCH -n 8
#SBATCH -J beast-GPU

# load module(s)
module load beagle/3.1.2
module load beast/1.10.4

beast -beagle_info</code>
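
To verify that a job actually received a GPU, you can run NVIDIA's standard ''nvidia-smi'' tool inside an allocation (a sketch; the partition name is an assumption):

<code>$ srun -p gpu --gres=gpu:1 nvidia-smi</code>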

==== Check queue status ====

''squeue'' shows the current state of the job queue:
<code>$ squeue
JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
...
</code>
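
To list only your own jobs, ''squeue'' accepts the standard ''-u'' flag:

<code>$ squeue -u $USER</code>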