Refer to the tables below for the batch resources available in psana. Submit your job from an interactive node (where you land after doing `ssh psana`). All nodes in the queues listed below run RHEL7. By submitting from an interactive node, which also runs RHEL7, you ensure that your job inherits a RHEL7 environment.
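As a sketch of a typical session (the script name below is a placeholder; queue names are taken from the tables that follow):

```shell
# Log in to an interactive RHEL7 node; jobs submitted from here
# inherit the RHEL7 environment.
ssh psana

# Submit a job to the primary psana SLURM queue
# (my_analysis.sh is a placeholder for your own batch script).
sbatch -p psanaq my_analysis.sh

# Check the status of your queued and running jobs.
squeue -u $USER
```

These commands only work on the cluster itself; see the SLURM submission page linked below for the full set of options.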
Note 1: Jobs for the current experiment can be submitted to fast feedback (FFB) queues, which allocate resources for the most recent experiments. The FFB queues in the tables below are for FEH experiments. NEH experiments can submit FFB jobs to the new Fast Feedback System, which is significantly faster and more modern than psana.
Note 2: Between Feb 1st and Feb 11th 2021, psana will operate with both the old LSF batch system and the new one, SLURM. During this time, the analysis team and the users will adapt to the new technology. After Feb 11th, all resources will be moved to SLURM queues and LSF will be retired.
**During the Feb 1st–11th transition (LSF and SLURM both active):**

| Queue name | Node names (and number) on LSF queues | Node names (and number) on SLURM queues | Comments | Throughput | Cores/node | RAM [GB/node] | Time limit |
|---|---|---|---|---|---|---|---|
| psanaq | psana11xx, psana12xx, psana13xx (60) | psana14xx (20) | Primary psana queue | 40 | 12 | 24 | 48 hrs |
| psdebugq | psana11xx, psana12xx, psana13xx (60) | psana14xx (20) | SHORT DEBUGGING ONLY (preempts psanaq jobs) | 40 | 12 | 24 | 10 min |
| psanaidleq | psana11xx, psana12xx, psana13xx (60) | psana14xx (20) | Jobs preemptable by psanaq | 40 | 12 | 24 | 48 hrs |
| psfehq | psana16xx (20) | psana15xx (20) | Low-priority FFB queue for the running experiment (off-shift, FEH experiments, preemptable by psfehhiprioq) | 40 | 16 | 128 | 24 hrs |
| psfehhiprioq | psana16xx (20) | psana15xx (20) | High-priority FFB queue for the running experiment (on-shift, FEH experiments, preempts psfehq) | 40 | 16 | 128 | 24 hrs |
| psanagpuq | N/A (0) | psanagpu113-psanagpu118 (6) | GPU nodes | 10 | 16 | 128 | 24 hrs |
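As one hedged example, a minimal SLURM batch script for the psanaq queue above might look like the following; the partition name and resource limits come from the table, while the job name, output path, and analysis command are placeholders:

```shell
#!/bin/bash
#SBATCH --partition=psanaq       # queue name from the table above
#SBATCH --job-name=my_analysis   # placeholder job name
#SBATCH --ntasks=12              # psanaq nodes have 12 cores each
#SBATCH --time=01:00:00          # must stay under the 48 hr queue limit
#SBATCH --output=%j.log          # log file named after the SLURM job ID

# Placeholder for your actual analysis command.
python my_analysis.py
```

Submit it with `sbatch my_analysis.sh` from an interactive node.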
**After Feb 11th (all resources moved to SLURM):**

| Queue name | Node names (and number) on LSF queues | Node names (and number) on SLURM queues | Comments | Throughput | Cores/node | RAM [GB/node] | Time limit |
|---|---|---|---|---|---|---|---|
| psanaq | N/A (0) | psana11xx, psana12xx, psana13xx, psana14xx (80) | Primary psana queue | 40 | 12 | 24 | 48 hrs |
| psdebugq | N/A (0) | psana11xx, psana12xx, psana13xx, psana14xx (80) | SHORT DEBUGGING ONLY (preempts psanaq jobs) | 40 | 12 | 24 | 10 min |
| psanaidleq | N/A (0) | psana11xx, psana12xx, psana13xx, psana14xx (80) | Jobs preemptable by psanaq | 40 | 12 | 24 | 48 hrs |
| psfehq | N/A (0) | psana15xx, psana16xx (40) | Low-priority FFB queue for the running experiment (off-shift, FEH experiments, preemptable by psfehhiprioq) | 40 | 16 | 128 | 24 hrs |
| psfehhiprioq | N/A (0) | psana16xx (20) | High-priority FFB queue for the running experiment (on-shift, FEH experiments, preempts psfehq) | 40 | 16 | 128 | 24 hrs |
| psanagpuq | N/A (0) | psanagpu113-psanagpu118 (6) | GPU nodes | 10 | 16 | 128 | 24 hrs |
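For the psanagpuq GPU nodes, a submission sketch might look like the following. Whether a `--gres` flag is required depends on how GPUs are configured as SLURM generic resources at this site, so treat that line as an assumption; the script name is a placeholder:

```shell
#!/bin/bash
#SBATCH --partition=psanagpuq    # GPU queue from the table above
#SBATCH --gres=gpu:1             # assumption: GPUs exposed as SLURM GRES
#SBATCH --time=02:00:00          # must stay under the 24 hr queue limit

# Placeholder for your GPU analysis command.
python my_gpu_analysis.py
```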
Information on submitting jobs to the SLURM system at LCLS can be found on this page: Submitting SLURM Batch Jobs
Information on submitting jobs to the (deprecated) LSF batch system can be found on this page: Submitting LSF Batch Jobs
Information on the Automatic Run Processing system (ARP) can be found on this page: Automatic Run Processing. This is also usable at sites like NERSC and SDF.
A "cheat sheet" showing similar commands on LSF and SLURM can be found here: https://slurm.schedmd.com/rosetta.pdf
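A few of the most common equivalents from that cheat sheet, shown side by side (not exhaustive; `job.sh` and the job ID are placeholders):

```shell
bsub < job.sh      # LSF: submit a batch script
sbatch job.sh      # SLURM equivalent

bjobs              # LSF: list your jobs
squeue -u $USER    # SLURM equivalent

bkill 12345        # LSF: cancel job 12345
scancel 12345      # SLURM equivalent
```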