...

tcpClient hps11 'tsDisableTriggerSource(0)'
tcpClient hps11 'tsEnableTriggerSource()'

Notes:

 - the "tcpClient" commands above will also pause  and restart  data taking! This is relevant for the Stepan's instructions on the run TWiki regarding the beam pause for under 40 min.

 - According to Cameron: the commands can be run on any clon machine when logged in as hpsrun.
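
For example, to pause data taking before a beam pause of under 40 minutes and restart it afterwards (run as hpsrun on any clon machine; hps11 is the host used in the commands above):

tcpClient hps11 'tsDisableTriggerSource(0)'
# ... beam pause, under 40 minutes ...
tcpClient hps11 'tsEnableTriggerSource()'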

Hall B run Database

Use this link to the Hall B run Database to find all the necessary run specifications:

...

6) Check the Flange EPICS adl screen (I think this was meant to be the SVT -> "FEB Main" GUI instead. --VF)

  • If some FEBs show red values of voltages / currents, it's likely they lost clock. Power-cycle only those FEBs.
    • Turn off the FEB's DIGI, ANAP, and ANAN
    • When OFF, turn on DIGI and wait until it comes up to ~1 A (additionally, some output confirming that communication is established will appear in the terminal where the Rogue process is running)
    • When DIGI is UP and the connection to Rogue is established, turn on ANAP and ANAN
  • If all FEBs are green proceed with (7)

7) Set PollEn to True and wait for all FEB links to become True

8) If all True and no errors in Rogue, LoadSettings (see previous section)

9) Turn on the Hybrids with the global button

...

  1. From behind the hall gateway → 'ssh -XY -C clonfarm1'
  2. Access the bash tools required for running environment scripts → 'scl enable devtoolset-8 bash'
  3. Setup the environment to run fit jobs → 'source /data/hps/src/setupHpstrEnv.sh'
  4. Check the setup → Enter 'which hpstr' into the terminal. It should return "/data/hps/src/hpstr/install/bin/hpstr" if the setup succeeded.
  5. Navigate to the offline fit job run directory → 'cd /data/hps/slac_svt/server/offlinePedestals'
  6. Inside this, make an evio storage directory to hold the copied data → 'mkdir hps_0<run_number>' (a consolidated example follows this list)
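
The six steps above, consolidated into one session (assuming run 14596, the run number used as an example elsewhere on this page):

ssh -XY -C clonfarm1
scl enable devtoolset-8 bash
source /data/hps/src/setupHpstrEnv.sh
which hpstr        # should return /data/hps/src/hpstr/install/bin/hpstr
cd /data/hps/slac_svt/server/offlinePedestals
mkdir hps_014596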

2) COPY EVIO FILES TO CLONFARM1 (FROM THE IFARM CACHE, OR FROM CLONDAQ7 FOR AN ONGOING RUN)

  1. In the clonfarm1 terminal
  2. Copy 10 cache files from the ifarm cache to clonfarm1 → 'scp <user>@ftp.jlab.org:/cache/hallb/hps/physrun2021/data/hps_0<run_number>/
  3. Alternatively, for an ongoing run, open another terminal and from behind the hall gateway → 'ssh -XY -C clondaq7'
  4. Enter 'bash' into the terminal to access the bash tools
  5. Navigate to the ongoing run data staging area → 'cd /data/stage7/hps_0<run_number>'
  6. Offline baseline fitting requires 30 evio files, so make sure enough files exist
  7. Copy 30 sequential files, starting at file number m, to clonfarm1: from inside the above data directory → 'scp hps_0<run_number>.evio.00000{m..m+29} <user>@clonfarm1.jlab.org:/data/hps/slac_svt/server/offlinePedestals/hps_0<run_number>/' (see the worked example below)
  8. Note the starting file number: "first_file" = m

3) COPY ONLINE BASELINE FROM IFARM TO CLONFARM1

  1. Open a new terminal and ssh into ifarm
  2. Navigate to the online baselines → 'cd /work/hallb/hps/phys2021_svt_calibrations/'
  3. Find the "svt_<online_run_number>_cond.dat" file with <online_run_number> closest to, but less than, the current run being fit.
    1. For example, if you're trying to offline fit Run 14596, locate the online baseline file "svt_014581_cond.dat"
  4. From the clonfarm1 terminal, copy the online baseline to clonfarm1 → 'scp <user>@ftp.jlab.org:/work/hallb/hps/phys2021_svt_calibrations/svt_0<number>_cond.dat /data/hps/slac_svt/server/offlinePedestals/online_baselines/' (see the example below)
    1. The file may already be available in online_baselines, so check first.
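
For example, if the current run is 14596, listing the files sorted by name makes it easy to spot the closest earlier online run (svt_014581_cond.dat in the example above); the ifarm hostname is assumed here:

ssh <user>@ifarm.jlab.org 'ls /work/hallb/hps/phys2021_svt_calibrations/svt_0*_cond.dat | sort | tail'
scp <user>@ftp.jlab.org:/work/hallb/hps/phys2021_svt_calibrations/svt_014581_cond.dat /data/hps/slac_svt/server/offlinePedestals/online_baselines/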

4) SETUP AND RUN JOBS ON CLONFARM1

  1. In the Clonfarm1 terminal
  2. Run evio → rawsvthits_hh jobs to create sample 0 histograms to fit baselines (a worked example for this section follows the list)
    1. Locate the run thresholds file that was used for the current run
      1. Find the file with <number> closest to, but less than, the <run_number> being fit → '/data/hps/slac_svt/server/thresholds/svt_0<number>_thresholds<settings>.dat'
      2. Copy this file path to be used in the next step.
    2. Modify 'vars.json' → 'vim /data/hps/slac_svt/server/offlinePedestals/vars.json'
      1. update "run_number" to match the current run
      2. update "first_file" to match the first file number m from section 2
      3. update "thresh" to match the thresholds file from step 2.a.i.
      4. save changes and close the file
    3. Create jobs → 'source /data/hps/slac_svt/server/offlinePedestals/mkjobs.sh'
    4. Run jobs → 'source run_pool.sh'
    5. Wait for all jobs to finish, then combine the histograms into one file → 'hadd output/rawsvthits_<run_number>_hh.root output/rawsvthits_0<run_number>*.root'
      1. NOTE the '0' before <run_number> has been removed from the hadded file name. You must do this!
  3. Run offline_baseline_fit jobs
    1. Create jobs → 'source /data/hps/slac_svt/server/offlinePedestals/mkjobs_blfits.sh'
    2. Run jobs → 'source run_pool_blfits.sh'
    3. When jobs finish, combine the layer fits into one file → 'hadd output/hps_<run_number>_offline_baseline_fits.root output/hps_<run_number>_offline_baseline_layer*.root'
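
A worked example for this section, assuming run 14596, first file 40, a thresholds file svt_014581_thresholds<settings>.dat, and the job-creation script name mkjobs.sh as reconstructed above. The exact layout of vars.json is an assumption; only the three keys below are documented here:

vars.json:
{
  "run_number": 14596,
  "first_file": 40,
  "thresh": "/data/hps/slac_svt/server/thresholds/svt_014581_thresholds<settings>.dat"
}

source /data/hps/slac_svt/server/offlinePedestals/mkjobs.sh
source run_pool.sh
hadd output/rawsvthits_14596_hh.root output/rawsvthits_014596*.root
source /data/hps/slac_svt/server/offlinePedestals/mkjobs_blfits.sh
source run_pool_blfits.sh
hadd output/hps_14596_offline_baseline_fits.root output/hps_14596_offline_baseline_layer*.root

Note that the first hadd target drops the '0' before the run number, consistent with the warning in step 2.5 above.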


5) GENERATE OFFLINE BASELINE AND THRESHOLD DATABASE FILES

  1. Run the python analysis script
    1. From the clonfarm1 terminal, return to /data/hps/slac_svt/server/offlinePedestals/
    2. Run the python analysis, using the "svt_<online_run_number>_cond.dat" file copied to online_baselines/ in section 3 → 'python3 offlineBaselineFitAnalysis.py -i output/hps_<run_number>_offline_baseline_fits.root -o output/hps_<run_number>_offline_baseline_fits_analysis.root -b online_baselines/svt_<online_run_number>_cond.dat -dbo output/hps_<run_number>_offline_baselines.dat -thresh output/hps_<run_number>_offline_thresholds.dat'
      1. the new offline baselines are located → output/hps_<run_number>_offline_baselines.dat
      2. the new offline thresholds are located → output/hps_<run_number>_offline_thresholds.dat
  2. Cleanup the output directory (a consolidated example follows this list)
    1. 'rm -rf ./scratch/*' to clear the job scratch dir
    2. 'mkdir output/<run_number>' and move "output/rawsvthits_<run_number>_hh.root", "output/hps_<run_number>_offline_baseline_fits_analysis.root", and ALL of the "<file>.dat" files into that output/<run_number> directory for safe-keeping.
    3. Remove all of the remaining loose files associated with this run inside of output/
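
Putting section 5 together, assuming run 14596 and online baseline run 14581 (the -dbo and -thresh flags are as reconstructed in the command above):

cd /data/hps/slac_svt/server/offlinePedestals
python3 offlineBaselineFitAnalysis.py -i output/hps_14596_offline_baseline_fits.root -o output/hps_14596_offline_baseline_fits_analysis.root -b online_baselines/svt_014581_cond.dat -dbo output/hps_14596_offline_baselines.dat -thresh output/hps_14596_offline_thresholds.dat
rm -rf ./scratch/*
mkdir output/14596
mv output/rawsvthits_14596_hh.root output/hps_14596_offline_baseline_fits_analysis.root output/hps_14596_offline_*.dat output/14596/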