...
- From behind the hall gateway → 'ssh -XY -C clonfarm1'
- Access bash tools required for running environment scripts → 'scl enable devtoolset-8 bash'
- Setup environment to run fit jobs → 'source /data/hps/src/setupHpstrEnv.sh'
- Check setup → Enter 'which hpstr' into terminal. It should return "/data/hps/src/hpstr/install/bin/hpstr" if the setup succeeded.
- Navigate to offline fit job run directory → 'cd /data/hps/slac_svt/server/offlinePedestals'
- Inside this, make an evio storage directory to hold data from clondaq7 → 'mkdir hps_0<run_number>'
2) COPY EVIO FILES FOR THE ONGOING RUN TO CLONFARM1 (FROM THE IFARM CACHE OR FROM CLONDAQ7)
- In the clonfarm1 terminal
- Copy 10 cache files to Clonfarm1 → 'scp <user>@ftp.jlab.org:/cache/hallb/hps/physrun2021/data/hps_0<run_number>/hps_0<run_number>.evio.000{40..49} /data/hps/slac_svt/server/offlinePedestals/hps_0<run_number>/'
- Open another terminal and from behind the hall gateway → 'ssh -XY -C clondaq7'
- Enter 'bash' into terminal to access bash tools
- Navigate to ongoing run data staging → 'cd /data/stage7/hps_0<run_number>'
- Offline baseline fitting requires 30 evio files, so make sure enough files exist
- Copy 30 sequential files to clonfarm1: from inside the above data directory → 'scp hps_0<run_number>.evio.00{m..m+29} <user>@clonfarm1.jlab.org:/data/hps/slac_svt/server/offlinePedestals/hps_0<run_number>/'
- Note the starting file number m: it becomes "first_file" in vars.json later.
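Two shell details are worth knowing for the copies above: `{40..49}` is bash brace expansion, which generates the ten names locally before scp runs, and brace expansion cannot compute `{m..m+29}` from a variable, so the 30-file list is easier to build with `seq`. A sketch with placeholder run and file numbers:

```shell
# (1) Brace expansion: the literal prefix attaches to each expanded
#     number, giving names like hps_014596.evio.00040.
echo hps_014596.evio.000{40..42}   # preview of the first three names

# (2) For an arbitrary first file m, brace expansion cannot do m+29,
#     so generate the 30 zero-padded names with seq instead.
run=014596   # placeholder run number
m=40         # placeholder first file number ("first_file" in vars.json later)
seq -f "hps_${run}.evio.%05g" "$m" "$((m + 29))" | wc -l   # count: should be 30
```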
3) COPY ONLINE BASELINE FROM IFARM TO CLONFARM1
- Open new terminal and ssh into ifarm
- Navigate to online baselines → 'cd /work/hallb/hps/phys2021_svt_calibrations/'
- Find the "svt_<online_run_number>_cond.dat" file with <online_run_number> closest to, but less than, the current run being fit.
- For example, if you're trying to offline fit "Run 14596", locate online baseline file "svt_014581_cond.dat"
- From Clonfarm1 terminal, copy online baseline to clonfarm1 → 'scp <user>@ftp.jlab.org:/work/hallb/hps/phys2021_svt_calibrations/svt_0<number>_cond.dat /data/hps/slac_svt/server/offlinePedestals/online_baselines/'
- This file may already be available in online_baselines/, so check there first.
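Picking the closest-but-smaller run number by eye is error-prone. Because the `svt_0<number>_cond.dat` names are zero-padded, plain string ordering matches numeric ordering, so a short loop can do the selection. A self-contained sketch using mock file names (run 14596 as the example):

```shell
# Pick the online baseline whose run number is closest to, but less
# than, the run being fit.  Zero-padded names sort lexically in
# numeric order, so string comparison is enough.
cd "$(mktemp -d)"                                      # throwaway dir for the sketch
touch svt_014520_cond.dat svt_014581_cond.dat svt_014600_cond.dat   # mock files
run=014596                                             # run being fit (example)
best=$(for f in svt_0*_cond.dat; do
         [ "$f" \< "svt_${run}_cond.dat" ] && echo "$f"
       done | sort | tail -n 1)
echo "$best"   # the file to copy over
```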
4) SETUP AND RUN JOBS ON CLONFARM1
- In Clonfarm1 terminal
- Locate the run thresholds file that was used for the current run
- Find the file with <number> closest to, but less than, the <run_number> being fit → 'thresholds/svt_0<number>_thresholds<settings>.dat'
- Copy this file path to be used in the next step.
- Run evio → rawsvthits_hh jobs to create sample0 histograms to fit baselines
- Modify 'vars.json' → 'vim /data/hps/slac_svt/server/offlinePedestals/vars.json'
- update "run_number" to match the current run
- update "first_file" to match the first file number m from step 2
- update "thresh" to match the thresholds file path copied above
- save changes and close file
- Create jobs → 'source /data/hps/slac_svt/server/offlinePedestals/mkjobs.sh'
- Run jobs → 'source run_pool.sh'
- Wait for the jobs to finish before proceeding.
- When jobs finish, combine histograms into one file → 'hadd output/rawsvthits_<run_number>_hh.root output/rawsvthits_0<run_number>*.root'
- NOTE the '0' before <run_number> has been removed from the hadded file name. You must do this!
- Run offline_baseline_fit jobs
- Create jobs → 'source /data/hps/slac_svt/server/offlinePedestals/mkjobs_blfits.sh'
- Run jobs → 'source run_pool_blfits.sh'
- When jobs finish, combine layer fits into one file → 'hadd output/hps_<run_number>_offline_baseline_fits.root output/hps_<run_number>_offline_baseline_layer*.root'
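The `vars.json` edits in this step touch three fields. Their exact schema belongs to the fitting scripts, so the sketch below is illustrative only (field names from the steps above, values made up); validating the file after a vim session catches stray commas before jobs are created.

```shell
# Illustrative vars.json with the three fields updated in this step.
# The real file may hold additional fields; leave those untouched.
cd "$(mktemp -d)"                   # throwaway dir for the sketch
cat > vars.json <<'EOF'
{
  "run_number": "014596",
  "first_file": 40,
  "thresh": "thresholds/svt_014581_thresholds.dat"
}
EOF
python3 -m json.tool vars.json      # parses cleanly only if the edit left valid JSON
```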
5) GENERATE OFFLINE BASELINE AND THRESHOLD DATABASE FILES
- Copy most recent online baseline run file from ifarm to clonfarm1
- Open new terminal, log in to ifarm, and navigate to '/work/hallb/hps/phys2021_svt_calibrations/'
- Find the "svt_<online_run_number>_cond.dat" file with run_number closest to, but less than, the current run being fit.
- Copy online baseline file to clonfarm1 → 'scp /work/hallb/hps/phys2021_svt_calibrations/svt_<online_run_number>_cond.dat <user>@clonfarm1.jlab.org:/data/hps/slac_svt/server/offlinePedestals/output/'
- Run python analysis script
- Return to the clonfarm1 terminal and navigate to /data/hps/slac_svt/server/offlinePedestals/
- Run python analysis → 'python3 offlineBaselineFitAnalysis.py -i output/hps_<run_number>_offline_baseline_fits.root -o output/hps_<run_number>_offline_baseline_fits_analysis.root -b output/svt_<online_run_number>_cond.dat -csv -dbo output/hps_<run_number>_offline_baselines.dat -thresh output/hps_<run_number>_offline_thresholds.dat'
- new offline baselines are located → output/hps_<run_number>_offline_baselines.dat
- new offline thresholds are located → output/hps_<run_number>_offline_thresholds.dat
- Cleanup the output directory
- 'rm -rf ./scratch/*' to clear job scratch dir
- 'mkdir output/<run_number>' and move "output/rawsvthits_0<run_number>_hh.root", "output/hps_<run_number>_offline_baseline_fits_analysis.root" and ALL of the "<file>.dat" files into that output/<run_number> directory for safe-keeping.
- Remove all of the loose files associated with this run inside of output/
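The cleanup steps above can be scripted. The sketch below works in a throwaway directory with mock files so it is self-contained; the file names follow the patterns used in this document, with an example run number standing in for <run_number>.

```shell
# Self-contained sketch of the cleanup step: archive one run's outputs
# into output/<run_number>/, then remove the loose files.
cd "$(mktemp -d)"            # throwaway dir standing in for offlinePedestals/
run=14596                    # example run number
mkdir -p scratch output
touch "output/rawsvthits_0${run}_hh.root" \
      "output/hps_${run}_offline_baseline_fits_analysis.root" \
      "output/hps_${run}_offline_baselines.dat" \
      "output/hps_${run}_offline_thresholds.dat" \
      "output/hps_${run}_offline_baseline_layer1.root"   # mock job outputs

rm -rf ./scratch/*                         # clear the job scratch dir
mkdir -p "output/${run}"                   # per-run archive directory
mv "output/rawsvthits_0${run}_hh.root" \
   "output/hps_${run}_offline_baseline_fits_analysis.root" \
   output/*.dat "output/${run}/"           # move the keepers into the archive
rm -f output/hps_${run}_offline_baseline_layer*.root   # drop loose files
ls "output/${run}"                         # the archived files
```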
...