Slurm Batch Usage

For generic advice on running in batch, see Running on SLAC Central Linux. Note that the actual batch system has since changed and that document has not been updated to reflect it, but its advice on copying data to local scratch, etc., still applies.

  • scratch space during job execution:
    • at job start, a directory is automatically created on the scratch of the worker: ${LSCRATCH} = /lscratch/${USER}/slurm_job_id_${SLURM_JOB_ID}
    • once all of a user's jobs on a node are completed/exited, their corresponding LSCRATCH directory on that host is deleted.
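A job script can use this directory as fast local working space. Here is a minimal sketch; the `mktemp` fallback and the staged file name are illustrative, not part of the scheduler setup:

```shell
#!/bin/bash
# Sketch: stage work through node-local scratch during a job.
# ${LSCRATCH} is created by the scheduler at job start; the mktemp
# fallback is only so this sketch also runs outside a batch job.
WORKDIR=${LSCRATCH:-$(mktemp -d)}
cd "$WORKDIR"

# ... copy inputs in, run the workload ...
echo "example result" > result.txt

# Copy results somewhere persistent before the job ends: the scratch
# directory is deleted once all your jobs on the node have finished.
echo "results staged in $WORKDIR"
```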

You need to specify an account and "repo" on your Slurm submissions. Repos allow subdividing our allocation among different uses. There are four repos available under the fermi account. The format is "--account fermi:<repo>", where <repo> is one of:

  • default (jobs are pre-emptible - if "paying jobs" need slots, pre-emptible jobs will be killed)
  • L1
  • other-pipelines
  • users 

The L1 and other-pipelines repos are restricted to known pipelines. Non-default repos have a quality of service (QOS) defaulting to normal (non-pre-emptible).

At time of writing, there is no accounting yet. When that is enabled, we'll have to decide how to split up our allocation into the various repos.
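On the command line, a submission against a specific repo looks like the following (myjob.sh is a hypothetical job script; the flag can also go in the script itself):

```
# Submit under the "users" repo:
#   sbatch --account=fermi:users myjob.sh
# Submit under the default repo (pre-emptible):
#   sbatch --account=fermi:default myjob.sh
```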

S3DF Slurm organizes the different hardware resource types under Slurm partitions; Slurm does not have the concept of a batch queue. Users specify the resources their job needs (because, for example, a 12-core CPU request can be satisfied by different types of CPUs). The following is an example script that submits a job to Slurm:

#!/bin/bash
#SBATCH --account=fermi:users
##SBATCH --partition=milano      # doubled '#' disables this directive; a partition is usually not needed
#SBATCH --job-name=my_first_job
#SBATCH --output=output-%j.txt
#SBATCH --error=output-%j.txt
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=2
#SBATCH --mem-per-cpu=4g
#SBATCH --time=0-00:10:00

# your commands go here, e.g.:
echo "running on $(hostname)"

Note that specifying the "--gpus a100:1" option is preferred over specifying "--partition=ampere" (the latter is not needed). If a GPU is not requested, your job will not have access to a GPU even if it lands on an ampere node.
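A minimal GPU request might look like the following sketch; the account/repo and time limit are illustrative:

```shell
#!/bin/bash
#SBATCH --account=fermi:users
#SBATCH --gpus a100:1            # request one A100; no --partition needed
#SBATCH --time=0-00:30:00

# Slurm exposes the allocated GPU(s) to the job through this variable;
# it is unset if no GPU was requested.
echo "allocated GPUs: ${CUDA_VISIBLE_DEVICES:-none}"
```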

Using cron

There is now a dedicated machine for cron: sdfcron001. Cron has been disabled on all other nodes. See:

You might want to keep a backup of your crontab file

You can run cronjobs in S3DF, and users don't have to worry about token expiration as on AFS. Run them on the dedicated cron node (sdfcron001), since cron is disabled elsewhere.

Note: cron jobs do NOT inherit your login environment. You'll need to set that up yourself in each entry.
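For example, a crontab entry can source your profile before the job runs; the script and log paths here are hypothetical:

```
# m  h  dom mon dow  command
15 2 * * * . $HOME/.bash_profile && $HOME/scripts/nightly.sh >> $HOME/nightly.log 2>&1
```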

Since a crontab is per host (there is no shared, cross-host crontab), the crontab will be lost if the node is reinstalled or removed. It's probably best to save your crontab as a file in your home directory so that you can re-add your cronjobs if this happens:

crontab -l > ~/crontab.backup

Then to re-add the jobs back in:

crontab ~/crontab.backup
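You can also let cron keep the backup current itself with an entry like the following; the weekly schedule is just an example:

```
# Every Sunday at 03:00, dump the current crontab to the backup file:
0 3 * * 0 crontab -l > $HOME/crontab.backup
```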