In 2021, LCLS switched to the SLURM batch system.

Information on submitting jobs to the SLURM system at LCLS can be found on this page: Submitting SLURM Batch Jobs
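
A minimal SLURM submission might look like the following sketch (the partition name and job script are placeholders; see the page above for the partitions available to your experiment):

sbatch -p <partition> -o <output file name> <job_script>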

Log in first to pslogin (from SLAC) or to psexport (from anywhere). From there you can submit a job with the following command:

bsub -q psnehq -o <output file name> <job_script_command>

For example:

bsub -q psnehq -o ~/output/job.out my_program

This will submit a job (my_program) to the queue psnehq and write its output to a file named ~/output/job.out. You may check on the status of your jobs using the bjobs command.
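
For example, to list your jobs or inspect one in detail (the job ID is the one bjobs reports):

bjobs
bjobs -l <job_id>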

Resource requirements can be specified using the "-R" option. For example, a select string can be added to make sure that a job runs on a node with 1 GB (or more) of available memory.
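
A sketch of such a submission (assuming the default LSF convention of expressing memory in MB, so 1 GB is written as 1024):

bsub -q psnehq -R "select[mem>1024]" -o ~/output/job.out my_program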

Information on the Automatic Run Processing system (ARP) can be found on this page: Automatic Run Processing (ARP).  This is also usable at sites like NERSC and SDF.

A "cheat sheet" showing similar commands on LSF and SLURM can be found here: https://slurm.schedmd.com/rosetta.pdf

Refer to the table below for the batch resources available in psana. Submit your job from an interactive node (where you land after doing ssh psana). All nodes in the queues listed below run RHEL7. By submitting from an interactive node, also running RHEL7, you will ensure that your job inherits a RHEL7 environment.
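
For example, a complete session might look like the following sketch (queue name taken from the table below; program and output path are the same hypothetical ones used above):

ssh psana
bsub -q psanaq -o ~/output/job.out my_program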

Note 1: Jobs for the current experiment can be submitted to fast feedback (FFB) queues, which allocate resources for the most recent experiments. The FFB queues in the tables below are for LCLS-II experiments (TMO, RIX and UED). The FEH experiments (LCLS-I, including XPP) can submit FFB jobs to the new Fast Feedback System.

Warning

As of February 2023, the offline compute resources have been consolidated into the psanaq. The priority queues have been removed.

Queue name | Node names on SLURM queues | Number of nodes | Comments            | Throughput [Gbit/s] | Cores/node | RAM [GB/node] | Time limit
psanaq     | psana15xx, psana16xx       | 34              | Primary psana queue | 40                  | 16         | 128           | 48 hrs
psanagpuq  | psanagpu113-psanagpu118    | 6               | GPU nodes           | 10                  | 16         | 128           | 48 hrs

...