Submitting/running jobs
For details on how to submit batch or interactive jobs on SCIAMA-4 using SLURM, please read this article.
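As a quick illustration only (the linked article has the full details), a batch job is submitted with sbatch and an interactive session is started with sinteractive; the script name below is a placeholder:

sbatch myjob.slurm   # submit a batch script (placeholder filename)
sinteractive         # request an interactive shell on a compute node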
Managing jobs
After submitting jobs, you can track their progress/status and, if necessary, cancel them. Please check this article for further details on job management.
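For example (123456 is a placeholder job ID):

squeue -u $USER           # list your queued and running jobs
scancel 123456            # cancel job 123456
scontrol hold 123456      # hold a pending job
scontrol release 123456   # release a held job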
Migration guide for Torque/PBS users
For people who are used to the “old” Torque/PBS system, here is a quick translation table for the most important commands:
Function | Torque | SLURM |
---|---|---|
Interactive shell on compute node | qsub -I | sinteractive |
Batch job submission | qsub | sbatch |
Queue status | qstat | squeue |
Delete job | qdel | scancel |
Hold job | qhold | scontrol hold |
Release job | qrls | scontrol release |
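For example, checking the queue and cancelling job 123456 (a placeholder ID) translate directly:

qstat -u $USER    # Torque
squeue -u $USER   # SLURM

qdel 123456       # Torque
scancel 123456    # SLURM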
Below is a simple job submission script. Basically, #PBS is replaced by #SBATCH.
The Torque line of the form “#PBS -l nodes=2:ppn=2” is replaced by the two lines “#SBATCH --nodes=2” and “#SBATCH --ntasks-per-node=2” (see the sketch after these notes).
A “queue” (-q) in Torque is replaced by a “partition” (-p) in SLURM.
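As a rough sketch (the job name, walltime and queue/partition values here are only illustrative), a minimal Torque header and its SLURM equivalent would look like this:

## Torque/PBS header (old)
#PBS -N myjob
#PBS -l nodes=2:ppn=2
#PBS -l walltime=02:00:00
#PBS -q sciama4.q

## SLURM header (new)
#SBATCH --job-name=myjob
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=2
#SBATCH --time=0-2:00
#SBATCH -p sciama4.q

The complete script below uses the same #SBATCH directives.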
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=2
#SBATCH --time=0-2:00
#SBATCH --job-name=starccm
#SBATCH -p sciama4.q
#SBATCH -D /users/burtong/test-starccm
#SBATCH --error=starccm.err.%j
#SBATCH --output=starccm.out.%j

##SLURM: ====> Job Node List (DO NOT MODIFY)
echo "Slurm nodes assigned :$SLURM_JOB_NODELIST"
echo "SLURM_JOBID="$SLURM_JOBID
echo "SLURM_JOB_NODELIST"=$SLURM_JOB_NODELIST
echo "SLURM_NNODES"=$SLURM_NNODES
echo "SLURMTMPDIR="$SLURMTMPDIR
echo "working directory = "$SLURM_SUBMIT_DIR
echo "SLURM_NTASKS="$SLURM_NTASKS
echo ------------------------------------------------------
echo 'This job is allocated on '${SLURM_NTASKS}' cpu(s)'
echo 'Job is running on node(s): '
echo $SLURM_JOB_NODELIST
echo ------------------------------------------------------

# Load the required module and run the application
module purge
module add starccm/12.06.011
starccm+ -rsh ssh -batch test1.sim -fabric tcp -power -podkey -np ${SLURM_NTASKS}
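Assuming the script above is saved as, for example, starccm.slurm (the filename is arbitrary), it would be submitted and monitored as follows:

sbatch starccm.slurm   # submit the batch script
squeue -u $USER        # check its status
scancel <jobid>        # cancel it if needed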