====== Using SLURM ======
To see more information about the queue configuration, use ''sinfo'':
<code>Thu Aug 04 15:08:48 2022
NODELIST   ...
compute03  ...
compute05  ...
compute06  ...
compute07  ...
hpc        1  debug*  ...
...</code>
The above tells you, for instance, which partition each node belongs to and how many CPUs it offers.
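If you need the full detail for a single node (CPUs, memory, state, GRES), ''scontrol'' can report it. A minimal sketch, assuming the standard SLURM client tools are available (the node name is only an example):

<code>$ scontrol show node compute06</code>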
===== Submitting jobs =====
==== Interactive jobs ====

<code>
salloc: Granted job allocation 1080
[aorth@compute05: ~]$</code>
**NB:** interactive jobs have a time limit of 8 hours: if you need more, then you should write an sbatch script (see below).

You can also open an interactive session on a specific node of the cluster by specifying it with the ''-w'' (node list) flag.
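A minimal sketch of such a request, assuming the standard ''salloc'' command is used (the node name and job ID are only examples):

<code>$ salloc -w compute05
salloc: Granted job allocation 1081
[aorth@compute05: ~]$</code>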
==== Batch jobs ====
We are writing a SLURM script below. The parameters in its header request 4 CPUs in the ''batch'' partition and name the job ''blastn''.
<code>#!/usr/bin/env bash
#SBATCH -p batch
#SBATCH -J blastn
#SBATCH -n 4

...

# run the blast with 4 CPU threads (cores)
blastn -query ~/...</code>
In the above, please **DO NOT FORGET the ''#!'' interpreter line** at the top of the script: ''sbatch'' will refuse a script that does not start with one.

We then submit the script with the ''sbatch'' command.
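A minimal sketch of the submission, assuming the script above is saved as ''blast.sbatch'' (the filename and job ID are only examples):

<code>$ sbatch blast.sbatch
Submitted batch job 1234</code>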
Instead of writing output over the network to your home directory, you can use a local "scratch" directory on the compute node where the job runs:
<code>#!/usr/bin/env bash
#SBATCH -p batch
#SBATCH -J blastn

...</code>
All output is directed to the local scratch directory on the compute node, so remember to copy any results you need back to your home directory when the job finishes.
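A fuller sketch of this pattern, assuming a node-local scratch path such as ''/var/scratch'' (the scratch path, module name, input files, and database are assumptions, not the cluster's documented values):

<code>#!/usr/bin/env bash
#SBATCH -p batch
#SBATCH -J blastn
#SBATCH -n 4

# load the blast module (module name assumed)
module load blast

# create a per-job working directory on the node-local scratch disk
WORKDIR=/var/scratch/${USER}/${SLURM_JOB_ID}
mkdir -p "$WORKDIR"

# run the blast with 4 CPU threads, writing output to local scratch
blastn -query ~/data/sequences.fasta -db nt -num_threads 4 -out "$WORKDIR/blast.out"

# copy the results back to the home directory and clean up
mkdir -p ~/results
cp "$WORKDIR/blast.out" ~/results/
rm -rf "$WORKDIR"</code>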
| + | |||
==== Run job using a GPU ====
Currently there is only one compute node with GPU capabilities. As of February 2026, compute06 has an NVIDIA Tesla V100 with 32GB of memory. In order to use it you will need to add an extra "gres" request to your job.

For example, the following script requests the GPU and uses it to run ''beast'':
| + | |||
| + | < | ||
| + | #SBATCH -p batch | ||
| + | #SBATCH -w compute06 | ||
| + | #SBATCH --gres=gpu: | ||
| + | #SBATCH -n 8 | ||
| + | #SBATCH -J beast-GPU | ||
| + | |||
| + | # load module(s) | ||
| + | module load beagle/ | ||
| + | module load beast/ | ||
| + | |||
| + | beast -beagle_info</ | ||
| + | |||
==== Check queue status ====
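Job and queue status can be checked with the standard SLURM ''squeue'' command. A minimal sketch (the job IDs, names, and states shown are only placeholders):

<code>$ squeue
  JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
   1080     batch   blastn    aorth  R       5:23      1 compute05
   1082     batch    beast    aorth PD       0:00      1 (Resources)</code>

Use ''squeue -u <username>'' to limit the list to your own jobs.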