====== Using SLURM ======
  
Our SLURM is configured with the following job queues (also called "partitions" in SLURM):
  * debug -- the default partition, used for testing
  * batch -- used when you have many jobs to run
  * highmem -- used for jobs with high memory requirements
  
 "debug" is the default queue, which is useful for testing job parameters, program paths, etc. The runtime limit of the "debug" partition is 5 minutes, after which jobs are killed. "debug" is the default queue, which is useful for testing job parameters, program paths, etc. The runtime limit of the "debug" partition is 5 minutes, after which jobs are killed.
 <code>#!/bin/bash <code>#!/bin/bash
 #SBATCH -p batch #SBATCH -p batch
 +#SBATCH -J blastn
 #SBATCH -n 4 #SBATCH -n 4
  
 export BLASTDB=/export/data/bio/ncbi/blast/db export BLASTDB=/export/data/bio/ncbi/blast/db
 +
 +module load blast/2.2.28+
  
 blastn -query ~/data/sequences/drosoph_14_sequences.seq -db nt -num_threads 4</code> blastn -query ~/data/sequences/drosoph_14_sequences.seq -db nt -num_threads 4</code>
Submitted batch job 1082</code>
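
By default SLURM writes a job's standard output and error to a file named ''slurm-<jobid>.out'' in the directory you submitted from (''slurm-1082.out'' for the job above). If you prefer your own file names, the standard ''-o'' and ''-e'' options work in the script (the names below are just examples):

<code>#SBATCH -o blastn.%j.out   # standard output (%j expands to the job ID)
#SBATCH -e blastn.%j.err   # standard error</code>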
  
==== Batch job using local storage ====
Users' home directories are mounted over the network (on "wingu"), so when you're on mammoth or taurus, anything you write to disk (i.e. job output) has to make a round trip over the network.

Instead, you can use a local "scratch" folder on the compute nodes to alleviate this burden, for example:

<code>#!/usr/bin/env bash
#SBATCH -p batch
#SBATCH -n 4
#SBATCH -J test

# create a job-specific scratch directory on the node's local disk
WORKDIR=/var/scratch/$SLURM_JOBID
mkdir -p $WORKDIR

echo "Using $WORKDIR on $SLURMD_NODENAME"
echo

Trinity.pl --seqType fq --JM 10G --single AR1960BN.fastq --output $WORKDIR/AR1960BN.out --CPU 4 --inchworm_cpu 4 --bflyCPU 4</code>

All output is directed to ''$WORKDIR/'', which is the temporary folder on the compute node. See these slides from [[http://alanorth.github.io/hpc-users-group3/#/2|HPC Users Group #3]] for more info.
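
Note that ''/var/scratch'' is local to each compute node, so results you want to keep should be copied back to your home directory (and the scratch folder removed) at the end of the job script. A minimal sketch, assuming you want the Trinity output under ''~/results'':

<code># at the end of the job script above
mkdir -p ~/results
cp -r $WORKDIR/AR1960BN.out ~/results/
rm -rf $WORKDIR</code>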
==== Check queue status ====
<code>squeue</code>
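
''squeue'' on its own lists every job on the cluster. A couple of common variations (standard SLURM options) narrow the listing down:

<code>$ squeue -u $USER        # only your own jobs
$ squeue -l -p highmem   # long format, only the "highmem" partition</code>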