
In 2021 LCLS switched to the SLURM batch system.

Information on submitting jobs to the SLURM system at LCLS can be found on this page: Submitting SLURM Batch Jobs

Information on the Automatic Run Processing system (ARP) can be found on this page: Automatic Run Processing (ARP).  This is also usable at sites like NERSC and SDF.

A "cheat sheet" showing similar commands on LSF and SLURM can be found here: https://slurm.schedmd.com/rosetta.pdf
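For quick reference, the most common translations from the cheat sheet look roughly like this (a sketch only; consult the rosetta PDF for the full option mapping):

```shell
# LSF command          ->  SLURM equivalent
# bsub < job.sh        ->  sbatch job.sh              (submit a batch script)
# bsub -q psanaq ...   ->  sbatch -p psanaq ...       (LSF queues map to SLURM partitions)
# bjobs                ->  squeue -u $USER            (list your pending/running jobs)
# bkill <jobid>        ->  scancel <jobid>            (cancel a job)
# bqueues              ->  sinfo                      (show partition/node status)
# bpeek <jobid>        ->  tail -f slurm-<jobid>.out  (output goes to a file by default)
```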

...

Refer to the table below for the batch resources available in psana. Submit your job from an interactive node (where you land after doing ssh psana). All nodes in the queues listed below run RHEL7. By submitting from an interactive node, also running RHEL7, you will ensure that your job inherits a RHEL7 environment.
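As an illustration, a minimal SLURM batch script targeting the psanaq might look like the following sketch. The partition name and core count come from the queue tables on this page; the job name and analysis script are hypothetical placeholders for your own code:

```shell
#!/bin/bash
#SBATCH --partition=psanaq        # queue/partition from the tables on this page
#SBATCH --job-name=my_analysis    # hypothetical job name
#SBATCH --ntasks=12               # one task per core on a psanaq node
#SBATCH --time=01:00:00           # wall-clock request, must fit the queue's time limit
#SBATCH --output=%j.out           # write stdout/stderr to <jobid>.out

# Run the analysis; the script path is a placeholder for your own code
mpirun python my_psana_script.py
```

You would then submit it from a psana interactive node with `sbatch my_job.sh`.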

Note 1: Jobs for the current experiment can be submitted to fast feedback (FFB) queues, which allocate resources for the most recent experiments. The FFB queues in the tables below are for FEH experiments; the NEH experiments are the LCLS-II experiments (TMO, RIX and UED). The FEH experiments (LCLS-I, including XPP) can also submit FFB jobs to the new Fast Feedback System, which is significantly faster and more modern than psana.

Note 2: Between Feb 1st and Feb 11th 2021, psana will operate with both the old LSF batch system and the new one, SLURM. During this time, the analysis team and the users will adapt to the new technology. After Feb 11th, all resources will be moved to SLURM queues and LSF will be retired.

Between Feb 1st and Feb 11th 2021


Warning

As of February 2023, the offline compute resources have been consolidated into the psanaq. The priority queues have been removed.

Queue name | Node names (and number) on LSF queues | Node names (and number) on SLURM queues | Comments | Throughput [Gbit/s] | Cores/Node | RAM [GB/node] | Time limit
psanaq | psana11xx, psana12xx, psana13xx (60) | psana14xx (20) | Primary psana queue | 40 | 12 | 24 | 48hrs
psdebugq | psana11xx, psana12xx, psana13xx (60) | psana14xx (20) | SHORT DEBUGGING ONLY (preempts psanaq jobs) | 40 | 12 | 24 | 10min
psanaidleq | psana11xx, psana12xx, psana13xx (60) | psana14xx (20) | Jobs preemptable by psanaq | 40 | 12 | 24 | 48hrs
psanafehq | psana16xx (20) | psana15xx (20) | Low priority FFB queue for the running experiment (off-shift, FEH experiments, preemptable by psanafehhiprioq) | 40 | 16 | 128 | 24hrs
psanafehhiprioq | psana16xx (20) | psana15xx (20) | High priority FFB queue for the running experiment (on-shift, FEH experiments, preempts psanafehq) | 40 | 16 | 128 | 24hrs
psanagpuq | psanagpu113-psanagpu118 (6) | N/A (0) | GPU nodes | 10 | 16 | 128 | 24hrs

Following the February 2023 consolidation noted in the warning above, the psanaq comprises psana15xx and psana16xx (34 nodes) and serves as the primary psana queue, with 40 Gbit/s throughput, 16 cores/node, 128 GB RAM/node and a 24hrs time limit.

After Feb 11th 2021

Queue name | Node names (and number) on LSF queues | Node names (and number) on SLURM queues | Comments | Throughput [Gbit/s] | Cores/Node | RAM [GB/node] | Time limit
psanaq | N/A (0) | psana11xx, psana12xx, psana13xx, psana14xx (80) | Primary psana queue | 40 | 12 | 24 | 48hrs
psdebugq | N/A (0) | psana11xx, psana12xx, psana13xx, psana14xx (80) | SHORT DEBUGGING ONLY (preempts psanaq jobs) | 40 | 12 | 24 | 10min
psanaidleq | N/A (0) | psana11xx, psana12xx, psana13xx, psana14xx (80) | Jobs preemptable by psanaq | 40 | 12 | 24 | 48hrs
psanafehq | N/A (0) | psana15xx, psana16xx (40) | Low priority FFB queue for the running experiment (off-shift, FEH experiments, preemptable by psanafehhiprioq) | 40 | 16 | 128 | 24hrs
psanafehhiprioq | N/A (0) | psana16xx (20) | High priority FFB queue for the running experiment (on-shift, FEH experiments, preempts psanafehq) | 40 | 16 | 128 | 24hrs
psanagpuq | N/A (0) | psanagpu113-psanagpu118 (6) | GPU nodes | 10 | 16 | 128 | 24hrs

Submitting Jobs

Information on submitting jobs to the SLURM system at LCLS can be found on this page: Submitting SLURM Batch Jobs

Information on submitting jobs to the (deprecated) LSF batch system can be found on this page: Submitting LSF Batch Jobs

Information on the Automatic Run Processing system (ARP) can be found on this page: Automatic Run Processing.  This is also usable at sites like NERSC and SDF.

...