
In 2021, LCLS switched to the SLURM batch system.

Information on submitting jobs to the SLURM system at LCLS can be found on this page: Submitting SLURM Batch Jobs

Information on the Automatic Run Processing system (ARP) can be found on this page: Automatic Run Processing (ARP).  This is also usable at sites like NERSC and SDF.

A "cheat sheet" showing similar commands on LSF and SLURM can be found here: https://slurm.schedmd.com/rosetta.pdf
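A few of the most common equivalents from that sheet, saved here as a local crib note (the file name is just an example):

```shell
# LSF-to-SLURM command equivalents from the rosetta cheat sheet,
# written to a local reference file (lsf_to_slurm.txt is an arbitrary name).
cat > lsf_to_slurm.txt <<'EOF'
bsub < job.sh  ->  sbatch job.sh    (submit a batch job)
bjobs          ->  squeue -u $USER  (list your jobs)
bkill <jobid>  ->  scancel <jobid>  (cancel a job by ID)
bqueues        ->  sinfo            (show queues/partitions)
EOF
grep -c -- '->' lsf_to_slurm.txt    # count the four mappings
```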

Refer to the table below for the batch resources available in psana. Submit your job from an interactive node (where you land after doing ssh psana); since the interactive nodes and all nodes in the queues listed below run RHEL7, submitting from an interactive node ensures that your job inherits a RHEL7 environment.
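As a minimal sketch of a submission (the job name and analysis script are placeholders; see the linked submission page for the full set of options), a batch script targeting the psanaq partition might look like:

```shell
# Write a minimal SLURM batch script for the psanaq partition
# (job name and my_analysis.py are illustrative placeholders).
cat > myjob.sbatch <<'EOF'
#!/bin/bash
#SBATCH --partition=psanaq      # primary psana queue (see table below)
#SBATCH --job-name=myjob        # placeholder job name
#SBATCH --ntasks=12             # one task per core on a psanaq node
#SBATCH --time=01:00:00         # request 1 hr, well under the 48 hr queue limit
#SBATCH --output=myjob-%j.out   # %j expands to the SLURM job ID
srun python my_analysis.py      # placeholder analysis step
EOF
echo "Submit from an interactive node with: sbatch myjob.sbatch"
```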

Note 1: Jobs for the current experiment can be submitted to fast feedback (FFB) queues, which allocate resources for the most recent experiments. The FFB queues in the tables below are for LCLS-II experiments (TMO, RIX and UED). The FEH experiments (LCLS-I, including XPP) can submit FFB jobs to the new Fast Feedback System.

| Queue name | Node names on SLURM queues | Number of Nodes | Comments | Throughput [Gbit/s] | Cores/Node | RAM [GB/node] | Time limit |
|---|---|---|---|---|---|---|---|
| psanaq | psana11xx, psana12xx, psana13xx, psana14xx | 80 | Primary psana queue | 40 | 12 | 24 | 48 hrs |
| psdebugq | psana11xx, psana12xx, psana13xx, psana14xx | 80 | SHORT DEBUGGING ONLY (preempts psanaq jobs) | 40 | 12 | 24 | 10 min |
| psanaidleq | psana11xx, psana12xx, psana13xx, psana14xx | 80 | Jobs preemptable by psanaq | 40 | 12 | 24 | 48 hrs |
| psfehq | psana15xx | 20 | Large memory queue | 40 | 16 | 128 | 12 hrs |
| psfehhiprioq | psana15xx | 20 | (to be removed) Same nodes as psfehq; high priority queue for the running experiment (preempts psfehq) | 40 | 16 | 128 | 12 hrs |
| psanagpuq | psanagpu113-psanagpu118 | 6 | GPU nodes | 10 | 16 | 128 | 48 hrs |