
Slurm manages resources (CPUs, GPUs)

Update 20240312

  • We think job steps (srun) will be able to satisfy the features (listed below) that we need to replace procmgr. 
  • Job arrays aren't a good option because they fit a "single command run in parallel" model, while we have a different command and environment for each process. 
  • We tested the following features for srun:
    1. If one step dies, what happens? → the other steps continue.
    2. Each job step can have its own independent --output and --error files for its output and errors. Note that anything related to Slurm errors will be written to the sbatch --output file. (A minimal sketch of this is shown after this list.)
    3. Can we use squeue/scontrol to check job step details? → yes. Below, sacct shows the details of a single job with three job steps.

      (ps-4.6.3) sacct -j 41308291
      JobID           JobName  Partition    Account  AllocCPUS      State ExitCode
      ------------ ---------- ---------- ---------- ---------- ---------- -------- 
      41308291       parallel     milano  lcls:data          6    RUNNING      0:0
      41308291.ba+      batch             lcls:data          6    RUNNING      0:0
      41308291.ex+     extern             lcls:data          6    RUNNING      0:0
      41308291.0       hello2             lcls:data          2    RUNNING      0:0
      41308291.1       hello3             lcls:data          2    RUNNING      0:0
      41308291.2       hello1             lcls:data          2    RUNNING      0:0
    4. Can we start/cancel individual steps? → yes, you can cancel an individual step:

      (ps-4.6.3) scancel 41308291.1
      (ps-4.6.3) sacct -j 41308291
      JobID           JobName  Partition    Account  AllocCPUS      State ExitCode
      ------------ ---------- ---------- ---------- ---------- ---------- -------- 
      41308291       parallel     milano  lcls:data          6    RUNNING      0:0
      41308291.ba+      batch             lcls:data          6    RUNNING      0:0
      41308291.ex+     extern             lcls:data          6    RUNNING      0:0
      41308291.0       hello2             lcls:data          2    RUNNING      0:0
      41308291.1       hello3             lcls:data          2 CANCELLED+      0:9
      41308291.2       hello1             lcls:data          2    RUNNING      0:0

      You can only restart the entire job: 

      (ps-4.6.3) scontrol requeue 41308291
    5. (DREAM) Can we specify resources (e.g. ask for a GPU node)? → yes 

      srun --partition ampere --account lcls:data -n 1 --time=00:10:00 --gpus=1 --pty /bin/bash

      Note on viewing slurm cluster info: 

      monarin@sdfiana002 ~ sinfo -o "%20N  %10c  %10m  %95f  %10G "
      NODELIST              CPUS        MEMORY      AVAIL_FEATURES                                                                                   GRES
      sdfrome[003-123]      128         512000      CPU_GEN:RME,CPU_SKU:7702,CPU_FRQ:2.00GHz                                                         (null)
      sdfmilan[001-072,101  128         512000      CPU_GEN:RME,CPU_SKU:7713,CPU_FRQ:2.00GHz                                                         (null)
      sdfampere[001-023]    128         1024000     CPU_GEN:RME,CPU_SKU:7542,CPU_FRQ:2.10GHz,GPU_GEN:AMP,GPU_SKU:A100,GPU_MEM:40GB,GPU_CC:8.0        gpu:a100:4
      sdfturing[001-016]    48          191552      CPU_GEN:SKX,CPU_SKU:5118,CPU_FRQ:2.30GHz,GPU_GEN:TUR,GPU_SKU:RTX2080TI,GPU_MEM:11GB,GPU_CC:7.5   gpu:geforc
      more info about sinfo format:
      https://slurm.schedmd.com/sinfo.html#SECTION_EXAMPLES
    6. (DREAM) Can we switch the BOS connection once the resources have been allocated?
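
A minimal sketch of items 1 and 2 above (step names and log file names here are just examples): if one step dies the others continue, and each step writes to its own files.

#!/bin/bash
#SBATCH --job-name parallel
#SBATCH --output slurm-%j.out       ## anything Slurm-related ends up here
#SBATCH --ntasks=2
#SBATCH --cpus-per-task=2
#SBATCH --time=0-00:10:00

# Each job step gets its own --output/--error files; step names show up in sacct.
srun -n1 --job-name hello1 --output hello1-%j.out --error hello1-%j.err \
     bash -c "sleep 2; echo 'hello 1'" &
srun -n1 --job-name hello2 --output hello2-%j.out --error hello2-%j.err \
     bash -c "sleep 4; echo 'hello 2'" &
wait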


First person to talk to is Ric

In principle we have our own customizable slurm installation on drp-srcf-*, but might still need work/tweaking from IT.

Can we extend Slurm to manage our resources (determined by KCU firmware)? e.g.

  • rix timing system fiber connection
  • cameralink node
  • high-rate timing nodes
  • low-rate timing nodes (epics)
  • hsd nodes
  • generic (wave8) nodes
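
One way to do this (a rough sketch; all feature names below are invented) would be to tag nodes with Features and/or GRES in slurm.conf and have each drp process request a matching node with --constraint instead of a hardwired hostname:

# slurm.conf on the drp cluster (sketch)
NodeName=drp-srcf-cmp[001-010] Feature=kcu_hsd,timing_high
NodeName=drp-srcf-cmp[011-014] Feature=kcu_camlink,timing_low

# request a node by capability rather than by name
srun -n1 --constraint=kcu_camlink <drp command for a cameralink detector>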

Slurm features to look into (could also look into other tools (airflow?) if necessary):

  • job arrays
  • job steps
  • heterogeneous job support
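
For reference, heterogeneous job support lets a single submission ask for differently shaped components (e.g. a GPU task plus several CPU-only tasks). A sketch of the batch syntax, not tested here (placeholder commands only):

#!/bin/bash
#SBATCH --ntasks=1 --partition=ampere --gpus=1   ## component 0: one GPU task
#SBATCH hetjob
#SBATCH --ntasks=4 --partition=milano            ## component 1: four CPU-only tasks

srun --het-group=0 -n1 hostname &
srun --het-group=1 -n4 hostname &
wait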

Conceptually want:

  • "sbatch tmo.cnf" (instead of "procmgr start tmo.cnf")
  • tmo.cnf has
    • typical timing system on cmp028: "drp -P tmo -D ts ..." 
    • typical control.py on mon001: "control -P tmo ..."
    • special localhost: "control_gui -P tmo -B DAQ:NEH" (this is unusual because it runs on the localhost and has a GUI; with procmgr, only localhost processes have GUIs)
  • critical features (if we could do this we may be able to replace procmgr; a rough sketch of such a batch script follows this list):
    • keep node allocations and command lines hardwired (like existing .cnf)
    • demo: run control/timing processes
    • need a replacement for procstat (could start with a command line version, do gui later)
    • if one job crashes, don't want the whole job to exit
    • different DAQ processes need different environments (pva detectors in particular; you can see this in the cnf)
    • would like a Python interface for the users, not a bash one (maybe reuse the existing cnf?)
    • need per-process log files
    • two people can't run the tmo daq at the same time (use Slurm details such as nodelist and jobname or comment to check whether another user is trying to use the same detectors/timing)
    • option to run each command in the cnf file as a single job or single job step
    • "activedet": remembering the previously selected detectors (this already exists; it is currently written into ~tmoopr/.psdaq, so we may need to make sure permissions work correctly since the DAQ could now run as "cpo"). Could we use "sacct"? Maybe we could add info to sacct? Currently control.py puts the info in files in directories like ~tmoopr/.psdaq. UPDATE: it looks like control_gui stores the values (selected detectors) directly in configdb using zmq, so we probably don't need anything new for this. See below for more detail (from running control_gui with --loglevel DEBUG):

      <D> 2024-03-13T13:43:18 psdaq.control_gui.QWPopupTableCheck: onApply
      <D> 2024-03-13T13:43:18 psdaq.control_gui.CGJsonUtils: control.setPlatform() json:
      {
        "control": {
          "0": {
            "active": 1,
            "hidden": 1,
            "control_info": {
              "xpm_master": 0,
              "pv_base": "DAQ:NEH",
              "cfg_dbase": "https://pswww.slac.stanford.edu/ws-auth/configdb/ws/configDB",
              "instrument": "tst",
              "slow_update_rate": 1
            },
            "proc_info": {
              "alias": "control",
              "host": "drp-srcf-cmp035",
              "pid": 34642
            }
          }
        },
        "teb": {
          "13749829319999405505": {
            "proc_info": {
              "alias": "teb0",
              "host": "drp-srcf-cmp035",
              "pid": 34640
            },
            "hidden": 1,
            "active": 1
  • nice-to-have features:
    • support interactive log file access like procstat
    • support a live xterm for processes, like procstat does with the .cnf "x" flag
    • support different conda envs for different detectors
  • the "dream":
    • understand which nodes are camlink nodes ("resource management")
    • dynamically allocate requested types of nodes
    • (hard, do as a second step?) would change the BOS connections so the right detectors were connected to the allocated nodes
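
To make the critical-features list above more concrete, here is a rough sketch of what an "sbatch tmo.cnf"-style script could look like. The command lines and host names are the placeholders from the cnf notes above; the job name, log file names and squeue check are just illustrations.

#!/bin/bash
#SBATCH --job-name tmo_daq          ## one well-known name per hutch
#SBATCH --output slurm-%j.out
#SBATCH --nodelist=cmp028,mon001
#SBATCH --ntasks=2

# Hardwired hosts and per-process log files, one job step per cnf entry.
# Per-detector environment setup would go inside each bash -c string.
srun -n1 -w cmp028 --job-name timing  --output timing-%j.log \
     bash -c "drp -P tmo -D ts ..." &
srun -n1 -w mon001 --job-name control --output control-%j.log \
     bash -c "control -P tmo ..." &
wait

# "Two people can't run the tmo daq at the same time" could be checked before
# submitting, e.g. with: squeue --name=tmo_daq --noheader
# (control_gui stays on the localhost, outside the batch job.)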

Note from Ric 3/1/2024:

I thought that maybe the first thing to try would be to figure out how to launch a process that brings up a GUI, e.g., groupca or xpmpva, or maybe even start simpler with xeyes or xclock.  The main idea was to test the ability to tell slurm that the process you want to run is an X11 application, which I read in the docs it can do.
The next thing might be to try to bring up the DAQ using slurm and thus thinking about what the slurm description file would look like.  Can we use something like the .cnf?  Can we automatically convert the .cnfs to whatever slurm requires?  Or do we need to start from scratch?  For this step I’m thinking we would still have to specify everything, like the node each process runs on.
The last thing I looked into a little bit was the idea of defining resources to slurm.  For this I thought I’d need some setup to try things out on, which resulted in Jira ECS-4017 (I don’t think anything was done though).  Chris Ford was also working on this project and he suggested setting up a virtual machine with a private slurm setup I could tinker with (I haven’t figured out how to do that, yet).  Anyway, the idea of the resources is that based on what each DRP needs (e.g., detector type, KCU firmware type, a GPU, X11, etc.), resources would be defined to slurm so that when you launch a DAQ, it would allocate the nodes according to the resources needed and start the processes on them.  Perhaps at some point in the future we could even have it modify the connections in the BOS to connect a detector to an available host that has the right KCU firmware, thus making RIX hosts available to TMO and vice versa.
I think that’s about as far as I got.  Let me know if you have questions.  I have to take Rachel to a doctor’s appointment at 1 so I think I’ll be out until 3 or so.  We can talk later, if you prefer.  I’ll take a look at the link as soon as I can.  Feel free to add the above to that if you think it would be helpful.
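
As a first test of the X11 idea, something like the following might work, assuming the Slurm installation has X11 forwarding enabled (e.g. PrologFlags=x11 in slurm.conf):

srun --partition milano --account lcls:data -n 1 --x11 xeyes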

Running job steps in parallel

Here's an example script that shows how we can run job steps in parallel (with &). 

job_step.sh
#!/bin/bash
  
#SBATCH --job-name parallel   
#SBATCH --output slurm-%j.out   
#SBATCH --ntasks=3  		## number of tasks (analyses) to run
#SBATCH --cpus-per-task=2  	## the number of threads allocated to each task
#SBATCH --time=0-00:10:00  

# Execute job steps
srun --ntasks=1 --nodes=1 --cpus-per-task=$SLURM_CPUS_PER_TASK bash -c "sleep 2; echo 'hello 1'" &
srun --ntasks=1 --nodes=1 --cpus-per-task=$SLURM_CPUS_PER_TASK bash -c "sleep 4; echo 'hello 2'" &
srun --ntasks=1 --nodes=1 --cpus-per-task=$SLURM_CPUS_PER_TASK bash -c "sleep 8; echo 'hello 3'" &
wait

For S3DF, this works: you can see that the three job steps started at the same time.

(ps-4.6.3) sacct -j 41486534 --format=JobID,Start,End,Elapsed,REQCPUS,ALLOCTRES%30
JobID                      Start                 End    Elapsed  ReqCPUS                      AllocTRES 
------------ ------------------- ------------------- ---------- -------- ------------------------------ 
41486534     2024-03-14T17:58:03 2024-03-14T17:58:12   00:00:09        6  billing=6,cpu=6,mem=6G,node=1 
41486534.ba+ 2024-03-14T17:58:03 2024-03-14T17:58:12   00:00:09        6            cpu=6,mem=6G,node=1 
41486534.ex+ 2024-03-14T17:58:03 2024-03-14T17:58:12   00:00:09        6  billing=6,cpu=6,mem=6G,node=1 
41486534.0   2024-03-14T17:58:04 2024-03-14T17:58:06   00:00:02        2            cpu=2,mem=2G,node=1 
41486534.1   2024-03-14T17:58:04 2024-03-14T17:58:08   00:00:04        2            cpu=2,mem=2G,node=1 
41486534.2   2024-03-14T17:58:04 2024-03-14T17:58:12   00:00:08        2            cpu=2,mem=2G,node=1

For the DRP nodes, running the same script shows that each job step waits for the previous one to finish. (Note that sacct reports each step allocating cpu=6 rather than cpu=2, which may explain why the steps cannot overlap.)

(ps-4.6.3) monarin@drp-srcf-eb001 (master) slurm sacct -j 625799 --format=JobID,Start,End,Elapsed,REQCPUS,ALLOCTRES%30 
       JobID               Start                 End    Elapsed  ReqCPUS                      AllocTRES 
------------ ------------------- ------------------- ---------- -------- ------------------------------ 
625799       2024-03-14T18:11:36 2024-03-14T18:11:45   00:00:09        6         billing=6,cpu=6,node=1 
625799.batch 2024-03-14T18:11:36 2024-03-14T18:11:45   00:00:09        6             cpu=6,mem=0,node=1 
625799.0     2024-03-14T18:11:36 2024-03-14T18:11:38   00:00:02        6            cpu=6,mem=6G,node=1 
625799.1     2024-03-14T18:11:38 2024-03-14T18:11:42   00:00:04        6            cpu=6,mem=6G,node=1 
625799.2     2024-03-14T18:11:42 2024-03-14T18:11:45   00:00:03        6            cpu=6,mem=6G,node=1

The example script above came from this page: https://hpc.nmsu.edu/discovery/slurm/tasks/parallel-execution/. It also discusses hyperthreading and how it impacts Slurm scheduling. I tried both changing --cpus-per-task=1 and adding --hint=nomultithread, but nothing changed.

