...

Slurm Batch Usage

For generic advice on running in batch, see Running on SLAC Central Linux. Note that the actual batch system has since changed and that page has not yet been updated to reflect it, but its advice on copying data to local scratch, etc., still applies. If you find you cannot submit jobs to the fermi:users repo, ask for access in the #s3df-migration slack channel.

  • LSB_JOBID -> SLURM_JOB_ID (the LSF job-ID environment variable is replaced by its Slurm equivalent)
  • scratch space during job execution (see the sketch after this list):
    • at job start, a directory is automatically created on the worker node's local scratch: ${LSCRATCH} = /lscratch/${USER}/slurm_job_id_${SLURM_JOB_ID}
    • once all of a user's jobs on a node have completed or exited, their corresponding LSCRATCH directory on that host is deleted.
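
A minimal sketch of how a job script might stage data through the scratch directory, assuming ${LSCRATCH} is exported into the job environment as described above; all file paths below are placeholders:

#!/bin/bash
# (the usual #SBATCH directives go here; see the example script further down)

# stage input into the per-job local scratch directory created by Slurm
cp /path/to/my_input.dat ${LSCRATCH}/        # placeholder input path
cd ${LSCRATCH}

# ... run the actual work here ...

# copy results back before the job ends: once all of this user's jobs on the
# node have finished, ${LSCRATCH} is deleted
cp my_results.out /path/to/output/dir/       # placeholder output path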

You need to specify an account and "repo" on your Slurm submissions. The repos allow subdivision of our allocation to different uses. There are 4 repos available under the fermi account. The format is "--account fermi:<repo>", where repo is one of:

  • default (jobs are pre-emptible - if "paying jobs" need slots, pre-emptible jobs will be killed)
  • L1
  • other-pipelines
  • users 

L1 and other-pipelines are restricted to known pipelines. Non-default repos have a quality of service (qos) that defaults to normal (non-pre-emptible).
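
For example, the account and repo can be given directly on the command line at submission time (the script name my_job.sh is just a placeholder):

sbatch --account=fermi:users my_job.sh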

At the time of writing, accounting is not yet enabled. When it is, we will have to decide how to split our allocation among the various repos.

S3DF Slurm organizes the different hardware resource types under Slurm partitions; Slurm does not have the concept of a batch queue. Users instead specify the resources their job needs (for example, a 12-core CPU request can be satisfied by different types of CPUs). The following is an example of a script that can be submitted to Slurm:

#!/bin/bash
#SBATCH --account=fermi:users
##SBATCH --partition=milano
#SBATCH --job-name=my_first_job
#SBATCH --output=output-%j.txt
#SBATCH --error=output-%j.txt
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=2
#SBATCH --mem-per-cpu=4g
#SBATCH --time=0-00:10:00
hostname
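
Assuming the script above is saved as my_first_job.sh (a hypothetical file name), it can be submitted and monitored as follows; stdout and stderr will end up in output-<jobid>.txt as configured above:

sbatch my_first_job.sh
squeue -u $USER        # check the job's state
scancel <jobid>        # cancel the job if needed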

Note that specifying the "--gpus a100:1" option is preferred over specifying "--partition=ampere" (the latter is not needed). If a GPU is not requested, your job will not have access to a GPU even if it lands on an ampere node.
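
As a sketch, a GPU job could be requested like this; the CPU, memory, and time values are illustrative, not recommendations:

#!/bin/bash
#SBATCH --account=fermi:users
#SBATCH --job-name=my_gpu_job
#SBATCH --gpus=a100:1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=4g
#SBATCH --time=0-00:10:00
nvidia-smi    # confirm the job sees the allocated GPU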

...