...

source /usr/clas12/release/1.4.0/slac_svt_new/V3.4.0/i86-linux-64/tools/envs-sdk.sh
 
Examples:
cob_dump --all atca1  ==> dumps the RCE status (booted is 0xea)
cob_rce_reset atca1  ==> resets all the RCEs
cob_rce_reset atca1/1/0/2  ==> resets a particular DPM (in this case dpm1)
cob_cold_data_reset atca1  ==> "power cycles" the RCEs (sometimes they do not come back up nicely, so cob_rce_reset might be needed afterwards)
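The reset sequence above can be collected into one snippet. The cob_* tools are site-specific (available after sourcing the envs-sdk.sh script above), so this sketch echoes each command when the tool is not on PATH, allowing a dry run anywhere:

```shell
#!/bin/bash
# Sketch: RCE recovery sequence for atca1, per the notes above.
# The cob_* tools only exist on the DAQ machines; fall back to a
# dry-run echo so the sequence can be inspected elsewhere.
run() {
  if command -v "$1" >/dev/null 2>&1; then "$@"; else echo "would run: $*"; fi
}
run cob_cold_data_reset atca1   # cold data reset ("power cycle") first
run cob_rce_reset atca1         # then reset all the RCEs
run cob_dump --all atca1        # check status: booted RCEs report 0xea
```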

....


INFO : clonfarm2 go.....


...

tcpClient hps11 'tsDisableTriggerSource(0)'
tcpClient hps11 'tsEnableTriggerSource()'

Notes:

 - the "tcpClient" commands above will also pause and restart data taking! This is relevant for Stepan's instructions on the run TWiki regarding beam pauses of under 40 min.

 - According to Cameron: the commands can be run on any clon machine when logged in as hpsrun.

Hall B run Database

This is the link to the Hall B run database, where all the necessary run specifications can be found.

...

 – otherwise:  execute cob_cold_data_reset atca1 and then cob_rce_reset atca1.

...

6) Check the Flange EPICS adl screen (I think this is meant to be the SVT -> "FEB Main" GUI instead. --VF)

  • If some FEBs show red voltage / current values, they have likely lost clock. Power-cycle those FEBs only.
    • Turn off the FEB's DIGI, ANAP and ANAN
    • When OFF, turn DIGI back on and wait until it comes up to ~1A (additionally, some output will be shown in the terminal where the Rogue process is running, confirming that communication is established)
    • When DIGI is UP and the connection to Rogue is established, turn on ANAP and ANAN
  • If all FEBs are green proceed with (7)

7) Set PollEn to True and wait for all FEB links to become True

8) If all True and no errors in Rogue, LoadSettings (see previous section)

9) Turn on the Hybrids with the global button

...

5) If one motor says "Uninitialized", click on Initialize and then Home. The motor will be sent home and re-calibrated.

6) If it was just a transient error, this should clear up and recover the motor functionality.


Offline Baseline+Thresholds Procedure

APV25 channel pedestals shift with occupancy and can change significantly once beam is introduced, relative to an online (no-beam) baseline run, in channels close to the beam, especially in the first two layers. Additionally, the pedestals can change over time with radiation exposure, so the more time that elapses between online baseline runs, the more likely it is that the high-occupancy channel pedestals are no longer consistent. For these reasons, we have an offline baseline fitting tool to extract the pedestals from production run data.

TO FIT OFFLINE BASELINES AND GENERATE A BASELINE AND THRESHOLD DATABASE FILE FOR A RUN:

1) SETUP OFFLINE FIT RUN DIRECTORY ON CLONFARM1

  1. From behind the hall gateway → 'ssh -XY -C clonfarm1'
  2. Access the bash tools required for the environment scripts → 'scl enable devtoolset-8 bash'
  3. Set up the environment to run fit jobs → 'source /data/hps/src/setupHpstrEnv.sh'
  4. Check the setup → enter 'which hpstr' in the terminal. It should return "/data/hps/src/hpstr/install/bin/hpstr" if set up successfully.
  5. Navigate to the offline fit job run directory → 'cd /data/hps/slac_svt/server/offlinePedestals'
  6. Inside this, make an evio storage directory to hold the data from clondaq7 → 'mkdir hps_0<run_number>'

2) COPY 10 EVIO FILES FROM IFARM CACHE TO CLONFARM1

  1. In the clonfarm1 terminal
  2. Copy 10 cache files to Clonfarm1 → 'scp <user>@ftp.jlab.org:/cache/hallb/hps/physrun2021/data/hps_0<run_number>/hps_0<run_number>.evio.000{40..49} /data/hps/slac_svt/server/offlinePedestals/hps_0<run_number>/'
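The '000{40..49}' in the scp command above is a bash brace expansion selecting ten consecutive evio segments. This sketch, for a hypothetical run number, shows the file names it expands to (note the zero-padded 'hps_0<run_number>' convention):

```shell
#!/bin/bash
# Sketch: enumerate the ten evio segment names selected by 000{40..49}
# for a hypothetical run number.
run_number=14596   # hypothetical run
files=()
for n in {40..49}; do
  files+=("hps_0${run_number}.evio.000${n}")   # e.g. hps_014596.evio.00040
done
printf '%s\n' "${files[@]}"
```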

3) COPY ONLINE BASELINE FROM IFARM TO CLONFARM1

  1. Open new terminal and ssh into ifarm 
  2. Navigate to online baselines → 'cd /work/hallb/hps/phys2021_svt_calibrations/'
  3. Find the "svt_<online_run_number>_cond.dat" file with <online_run_number> closest to, but less than, the current run being fit.
    1. For example, if you're trying to offline fit "Run 14596", locate online baseline file "svt_014581_cond.dat"
  4. From Clonfarm1 terminal, copy online baseline to clonfarm1 → 'scp <user>@ftp.jlab.org:/work/hallb/hps/phys2021_svt_calibrations/svt_0<number>_cond.dat /data/hps/slac_svt/server/offlinePedestals/online_baselines/'
    1. File may already be available in online_baselines, so check first.
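The "closest to, but less than" selection in step 3 can also be done from the shell. This sketch (demonstrated on mock file names in a temp directory, not the real /work area) picks the svt_0<N>_cond.dat with the largest N that does not exceed the run being fit:

```shell
#!/bin/bash
# Sketch: pick the online baseline file whose run number is closest to,
# but not above, the run being fit. Mock files stand in for the real
# /work/hallb/hps/phys2021_svt_calibrations/ directory.
pick_baseline() {   # usage: pick_baseline <dir> <run_number>
  local best="" f n
  for f in "$1"/svt_0*_cond.dat; do
    n=${f##*/svt_0}; n=${n%_cond.dat}      # extract the run number
    [ "$n" -le "$2" ] && best=$f           # glob order is ascending
  done
  echo "${best##*/}"
}
dir=$(mktemp -d)
touch "$dir"/svt_014520_cond.dat "$dir"/svt_014581_cond.dat "$dir"/svt_014600_cond.dat
pick_baseline "$dir" 14596   # the example from step 3: run 14596 -> svt_014581
```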

4) SETUP AND RUN JOBS ON CLONFARM1

  1. In Clonfarm1 terminal
  2. Run evio → rawsvthits_hh jobs to create sample 0 histograms to fit baselines
    1. Locate run thresholds file that was used for the current run
      1.  Find file with <number> closest to but less than <run_number> being fit → '/data/hps/slac_svt/server/thresholds/svt_0<number>_thresholds<settings>.dat'
      2. Copy this file path to be used in next step.
    2. Modify 'vars.json' → 'vim /data/hps/slac_svt/server/offlinePedestals/vars.json'
      1. update "run_number" to match the current run
      2. update "thresh" to match file from step 2.a.i.
      3. save changes and close file
    3. Create jobs → 'source /data/hps/slac_svt/server/offlinePedestals/mkjobs.sh'
    4. Run jobs → 'source run_pool.sh'
    5. When jobs finish, combine histograms into one file → 'hadd output/rawsvthits_<run_number>_hh.root output/rawsvthits_0<run_number>*.root'
      1. NOTE the '0' before <run_number> has been removed from the hadded file name. You must do this!
    6. Wait for jobs to finish before proceeding to step 3.
  3. Run offline_baseline_fit jobs  
    1. Create jobs → 'source /data/hps/slac_svt/server/offlinePedestals/mkjobs_blfits.sh'
    2. Run jobs → 'source run_pool_blfits.sh'
    3. When jobs finish, combine layer fits into one file → 'hadd output/hps_<run_number>_offline_baseline_fits.root output/hps_<run_number>_offline_baseline_layer*.root'
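The leading-zero rule in step 2.e trips people up, so here is a sketch (hypothetical run number) of assembling that hadd command with the zero dropped from the output name while the per-segment inputs keep it:

```shell
#!/bin/bash
# Sketch: build the step-2.e hadd command. Input files on disk are named
# rawsvthits_0<run>... but the combined output must NOT have the leading 0.
run_number=14596                 # hypothetical run
padded="0${run_number}"          # zero-padded form used by the job outputs
raw_hadd="hadd output/rawsvthits_${run_number}_hh.root output/rawsvthits_${padded}*.root"
echo "$raw_hadd"                 # echoed rather than run in this sketch
```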

5) GENERATE OFFLINE BASELINE AND THRESHOLD DATABASE FILES

  1. Run python analysis script
    1. From clonfarm1 terminal directory /data/hps/slac_svt/server/offlinePedestals/
    2. Run python analysis → 'python3 offlineBaselineFitAnalysis.py -i output/hps_<run_number>_offline_baseline_fits.root -o output/hps_<run_number>_offline_baseline_fits_analysis.root -b online_baselines/svt_<online_run_number>_cond.dat -dbo output/hps_<run_number>_offline_baselines.dat -thresh output/hps_<run_number>_offline_thresholds.dat'
      1. new offline baselines are located → output/hps_<run_number>_offline_baselines.dat
      2. new offline thresholds are located → output/hps_<run_number>_offline_thresholds.dat
  2. Cleanup the output directory
    1. 'rm -rf ./scratch/*' to clear job scratch dir
    2. 'mkdir output/<run_number>' and move "output/rawsvthits_<run_number>_hh.root", "output/hps_<run_number>_offline_baseline_fits_analysis.root" and ALL of the "<file>.dat" files into that output/<run_number> directory for safe-keeping.
    3. Remove all of the loose files associated with this run inside of output/
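The cleanup in step 5.b can be sketched as follows. It is demonstrated in a throwaway temp directory with touched placeholder files, not in the real /data/hps/slac_svt/server/offlinePedestals area:

```shell
#!/bin/bash
# Sketch of the step-5.b cleanup, using placeholder files in a temp
# directory instead of the real offlinePedestals working area.
run=14596                              # hypothetical run
cd "$(mktemp -d)"
mkdir -p scratch output
touch scratch/job1.log \
      output/rawsvthits_${run}_hh.root \
      output/hps_${run}_offline_baseline_fits_analysis.root \
      output/hps_${run}_offline_baselines.dat \
      output/hps_${run}_offline_thresholds.dat
rm -rf ./scratch/*                     # clear the job scratch dir
mkdir "output/${run}"                  # per-run archive directory
mv output/rawsvthits_${run}_hh.root \
   output/hps_${run}_offline_baseline_fits_analysis.root \
   output/*.dat "output/${run}/"       # keep results together
archived=$(ls "output/${run}" | wc -l)
echo "$archived files archived"
```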