...

No Format
% source /sdf/group/lcls/ds/ana/sw/conda2/manage/bin/psconda.sh

Batch processing

The S3DF batch compute documentation describes Slurm batch processing in detail. Here we give a short summary relevant for LCLS users.

  • A partition and a Slurm account should be specified when submitting jobs. The Slurm account is lcls:<experiment-name>, e.g. lcls:xpp123456; it is used to keep track of resource usage per experiment.

    Code Block
    % sbatch -p milano --account lcls:xpp1234 ........
  • In S3DF, memory is limited to 4GB/core by default. Usually that is not an issue, as processing jobs use many cores (e.g. a job with 64 cores would request 256GB of memory)
    • the memory limit is enforced: a job that exceeds it will fail with an OUT_OF_MEMORY status
    • the limit can be increased using the --mem sbatch option
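The options above can be combined in a single batch script. A minimal sketch, assuming the milano partition and the lcls:<experiment-name> account format described above; the job name, core count, memory request, and analysis script name are placeholders:

```shell
#!/bin/bash
#SBATCH --job-name=my_analysis      # placeholder job name
#SBATCH --partition=milano          # S3DF partition
#SBATCH --account=lcls:xpp123456    # lcls:<experiment-name>, tracks usage per experiment
#SBATCH --ntasks=64                 # 64 cores -> 256GB at the default 4GB/core
#SBATCH --mem=512G                  # optional: raise the memory limit above the default

srun python my_analysis.py          # my_analysis.py is a placeholder
```

Submit with sbatch my_script.sh; without the --mem line the job would get the default 64 x 4GB = 256GB.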

Jupyter

Jupyter is provided by the onDemand service. (We are not planning to run a standalone JupyterHub as is done at PCDS; for more information, check the S3DF interactive compute docs.)

...