...
Create a job submission script (a plain text file) named script.sh:
```
#!/bin/bash
#SBATCH --account=myaccount
#SBATCH --partition=shared
#SBATCH --qos=scavenger
#
#SBATCH --job-name=test
#SBATCH --output=output-%j.txt
#SBATCH --error=output-%j.txt
#
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=100
#
#SBATCH --time=10:00
#
#SBATCH --gpus=geforce_gtx_1080_ti:1

<commands here>
```
Then
...
you will need an account (see below). All SLAC users have access to the "shared" partition with a quality of service (QOS) of "scavenger". This ensures that stakeholders of machines in the SDF get priority access to their own resources, while any user can use all resources as long as the 'owners' of the hardware are not using them. As such, owners (or stakeholders) have "normal" QOS access to their partitions (whose hosts are also part of the shared partition).
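If you are unsure which accounts, partitions, and QOS levels you may submit under, the SLURM accounting client can list your associations. A quick check, assuming sacctmgr is available on the login node:

```
# list the account/partition/QOS combinations your user may submit under
sacctmgr show associations user=$USER format=account,partition,qos
```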
Then, in order to submit the job:
```
module load slurm
sbatch script.sh
```
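On success, sbatch prints the ID assigned to your job; you will need this ID to monitor or cancel the job later. The job number below is illustrative:

```
$ sbatch script.sh
Submitted batch job 123456
```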
You can then use the squeue command to monitor your job's progress:
```
squeue
```
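Note that squeue on its own lists every job on the cluster; to narrow the output to just your own jobs, filter by user:

```
# show only your own jobs (username taken from $USER)
squeue -u $USER
```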
And you can cancel the job with:
```
scancel <jobid>
```
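If you have several jobs queued, scancel also accepts a user filter to cancel them all at once:

```
# cancel all of your own jobs
scancel -u $USER
```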
How can I request GPUs?
```
# request a single gpu
srun -A myaccount -p mypartition1[,mypartition2] -n 1 --gpus 1 --pty /bin/bash

# request a gtx 1080 ti gpu
srun -A myaccount -p mypartition1[,mypartition2] -n 1 --gpus geforce_gtx_1080_ti:1 --pty /bin/bash

# request an rtx 2080 ti gpu
srun -A myaccount -p mypartition1[,mypartition2] -n 1 --gpus geforce_rtx_2080_ti:1 --pty /bin/bash

# request a v100 gpu
srun -A myaccount -p mypartition1[,mypartition2] -n 1 --gpus v100:1 --pty /bin/bash
```
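The same --gpus syntax works in a batch script. A minimal sketch, reusing the account, partition, and QOS names from the example above (adjust them to your own allocation):

```
#!/bin/bash
#SBATCH --account=myaccount
#SBATCH --partition=shared
#SBATCH --qos=scavenger
#SBATCH --ntasks=1
#SBATCH --gpus=v100:1
#SBATCH --time=10:00

# print the GPU(s) allocated to this job
nvidia-smi
```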
...