...

The S3DF batch compute page describes Slurm batch processing. Here we give a short summary relevant for LCLS users.

  • A partition and a Slurm account should be specified when submitting jobs. The Slurm account is lcls:<experiment-name>, e.g. lcls:xpp123456, and is used for keeping track of resource usage per experiment:

    % sbatch -p milano --account lcls:xpp1234 ........
    • The account can also be set via an environment variable:

      % SLURM_ACCOUNT=lcls:experiment executable-to-run [args]

      or

      % export SLURM_ACCOUNT=lcls:experiment
      % executable-to-run [args]
  • In S3DF, memory is limited to 4GB/core by default. Usually that is not an issue, as processing jobs use many cores (e.g. a job with 64 cores would request 256GB of memory).
  • The memory limit is enforced: a job that exceeds it will fail with an OUT_OF_MEMORY status.
  • Memory can be increased using the --mem sbatch option.
  • The default total run time is 1 day; the --time option allows you to increase or decrease it.
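Putting the options above together, a batch script might look like the sketch below. The partition, account, core count, and executable name are placeholders; substitute your own experiment values.

```shell
#!/bin/bash
#SBATCH --partition=milano          # partition (placeholder)
#SBATCH --account=lcls:xpp123456    # Slurm account: lcls:<experiment-name>
#SBATCH --ntasks=64                 # 64 cores -> 256GB at the 4GB/core default
#SBATCH --mem=512G                  # raise the memory limit above the default
#SBATCH --time=02:00:00             # 2 hours instead of the 1-day default

./executable-to-run args            # hypothetical processing executable
```

Submit it with `sbatch myjob.sh`; `squeue --me` then shows the job's state under the chosen partition and account.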

Jupyter

Jupyter is provided by the OnDemand service. (We are not planning to run a standalone JupyterHub as is done at PCDS. For more information, check the S3DF interactive compute docs.)

...