...

The commands specified in the script file will be run on the first available compute node that fits the requested resources.

The following is a simple submission script for a parallel psana batch job run with MPI. It can be submitted with the command "sbatch submit.sh":

Code Block
psanagpu101:~$ more submit.sh
#!/bin/bash

#SBATCH --partition=anagpu
#SBATCH --ntasks=4
#SBATCH --ntasks-per-node=2
#SBATCH --output=%j.log


# -u flushes print statements which can otherwise be hidden if mpi hangs
mpirun python -u /reg/g/psdm/tutorials/examplePython/mpiDataSource.py


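mpiDataSource.py lives in the psdm tutorials area and is not reproduced here. As a rough illustration only, the sketch below (hypothetical names, plain mpi4py rather than the tutorial's psana code) shows the rank-based work splitting that mpirun and the --ntasks setting drive: each of the 4 tasks gets its own rank and processes its own share of the events.

Code Block
# Hypothetical mpi4py sketch -- NOT the tutorial's mpiDataSource.py.
# Illustrates how each MPI task (rank) processes its own slice of the data.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # index of this task, 0 .. size-1
size = comm.Get_size()          # total number of tasks (--ntasks)

events = range(100)             # stand-in for events read from a psana DataSource

# Round-robin split: rank r handles events r, r+size, r+2*size, ...
my_count = sum(1 for i in events if i % size == rank)

# Gather the per-rank counts on rank 0 and report them.
counts = comm.gather(my_count, root=0)
if rank == 0:
    print("events processed per rank:", counts, flush=True)

Because the script is launched under slurm, the same pattern scales simply by changing --ntasks in the submission script.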
This script shows some additional features controllable via slurm:

Code Block
> cat tst_script 
#!/bin/bash
#
#SBATCH --job-name='name' # Job name for allocation
#SBATCH --output='filename' # File to which STDOUT will be written, %j inserts jobid
#SBATCH --error='filename' # File to which STDERR will be written, %j inserts jobid
#SBATCH --partition=anagpu # Partition/Queue to submit job
#SBATCH --gres=gpu:1080ti:1 # Number of GPUs
#SBATCH --ntasks=8 # Total number of tasks
#SBATCH --ntasks-per-node=4 # Number of tasks per node
#SBATCH --mail-user='username'@slac.stanford.edu # Receive e-mail from slurm
#SBATCH --mail-type=ALL # Type of e-mail from slurm; other options include BEGIN, END, and FAIL
#
srun -l hostname
srun python ExampleMultipleChaperones.py


> sbatch tst_script 
Submitted batch job 187

...