```
(ana-4.0.30-py3) -bash-4.2$ ./arp_scripts/submit_smd.sh -h
submit_smd.sh:
Script to launch a smalldata_tools run analysis
OPTIONS:
    -h|--help
        Definition of options
    -e|--experiment
        Experiment name (e.g. cxilr6716)
    -r|--run
        Run number
    -d|--directory
        Full path to the directory for the output file
    -n|--nevents
        Number of events to analyze
    -q|--queue
        Queue to use on SLURM
    -c|--cores
        Number of cores to be utilized
    -f|--full
        If specified, translate everything
    -D|--default
        If specified, translate only smalldata
    -i|--image
        If specified, translate everything & save area detectors as images
    --norecorder
        If specified, don't use recorder data
    --nparallel
        Number of processes per node
    --postTrigger
        Post to the eLog that primary processing is done so that secondary jobs can start
    --interactive
        Run the process live without the batch system
```
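A typical invocation using the options above might look like the following. This is a hedged sketch, not a command from this document: the run number, output directory, queue name, and core count are placeholders, and the command is assembled and printed rather than executed, since actually submitting requires the LCLS batch environment.

```shell
#!/bin/bash
# Hypothetical submit_smd.sh invocation (all values are placeholders).
# Dry run: the command line is built and echoed instead of being submitted.
EXPERIMENT="cxilr6716"      # experiment name, taken from the help text example
RUN=42                      # run number (placeholder)
OUTDIR="/path/to/output"    # full path for the output file (placeholder)
QUEUE="anaq"                # SLURM queue name (placeholder)
CORES=16                    # number of cores (placeholder)

CMD="./arp_scripts/submit_smd.sh -e $EXPERIMENT -r $RUN -d $OUTDIR -q $QUEUE -c $CORES"
echo "$CMD"
```

The same flags can be stored in the ARP job definition so each new run is submitted with identical arguments.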
We will usually set up the production to run automatically through the ARP. The number of jobs is tuned to use as few cores as necessary to process data at data-taking speed, keeping the time before the files become available to a minimum while leaving the queue as empty as possible. This is useful in cases where the reduction parameters are stable across sets of runs (most XPP and XCS experiments fall into this category).

Setup and automatic production
During the experiment we will produce the smallData files automatically. Since Run 18, we have been using the Automatic Run Processing (ARP) for that. During experimental setup, we usually take test runs of the appropriate length so that the production can be tuned to finish close to the end of a run. Jobs can be rerun and stopped, and each job prints out where it is. We can set up a second job, started when the hdf5 production is done, that either makes data-quality plots or produces the binned data.
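The two-step setup described above can be sketched as follows. This is an assumption-laden dry run, not the actual ARP configuration: the secondary script name is invented for illustration, and both commands are echoed rather than submitted. The only flag grounded in this document is `--postTrigger`, which posts to the eLog when primary processing finishes so a secondary job can start.

```shell
#!/bin/bash
# Hypothetical chained-job sketch (dry run; commands are echoed, not executed).
# Primary job: hdf5 production with --postTrigger so completion is posted to the eLog.
PRIMARY="./arp_scripts/submit_smd.sh -e cxilr6716 -r 42 --postTrigger"
# Secondary job triggered by that eLog post; the script name below is a
# placeholder, not a real smalldata_tools entry point.
SECONDARY="./arp_scripts/make_quality_plots.sh -e cxilr6716 -r 42"
echo "$PRIMARY"
echo "$SECONDARY"
```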