Related:
SLAC/EPP/HPS Public
Jefferson Lab/Hall B/HPS Run Wiki
S30XL-LESA/LDMX
From the counting house as hpsrun (should already be done):
In one terminal: daqvnc.py connect config hps-svt-2
In another terminal: daqvnc.py connect config hps-svt-3
Remotely (via the Hall B gateway):
ssh -Y clonfarm2
vncviewer :2
ssh -Y clonfarm3
vncviewer :2
Remotely (outside the JLab network):
|
Then:
Any crate-related command should be issued by SVT experts only.
SDK software installation to talk to the ATCA crate
SVT software is installed in
|
On clonfarm2, start the Rogue server:
|
The "--epicsEn" flag is necessary to enable controls via EPICS.
On clonfarm3, start the dummy server:
|
If opening the Rogue GUIs for the first time, make sure all of the FEBs are turned off.
To take a CODA run, both the Rogue server and a dummy server need to be started. To start the Rogue server, first ssh into clonfarm2 and issue the following commands:

source /usr/clas12/release/1.4.0/slac_svt_new/anaconda3/etc/profile.d/conda.sh
conda activate rogue_5.9.3
cd /data/hps/slac_svt/server/heavy-photon-daq/software/scripts/
python SvtCodaRun.py --local --env JLAB --epicsEn
The dummy server runs on clonfarm3 and can be brought up as follows after ssh'ing into that machine:

source /usr/clas12/release/1.4.0/slac_svt_new/anaconda3/etc/profile.d/conda.sh
conda activate rogue_5.9.3
cd /data/hps/slac_svt/server/heavy-photon-daq/software/scripts/
python SvtCodaDummy.py --local --env JLAB --epicsEn
At this point, the FEBs and hybrids can be brought up via the medm GUIs.
Once the hardware has been powered up, you can initialize a run in CODA using the configuration PROD77_SVT and the config file hps_v2_svtOnly_noThresh.
CODA run control should be running in a VNC on clondaq7. If not, contact the DAQ expert.
The online monitoring can be started with
startSvtCheckout
which is in the hpsrun user's path.
We experience a few recurring issues with the DAQ infrastructure during run cycles. In particular, it is advised to reset the DPMs in these cases; this is usually done before anything else:
heavy-photon-daq/software/scripts/resetDataDpm.sh
If the DPMs do not come up after several resets, one can try a cold reset (but this might create issues in the recovery):
heavy-photon-daq/software/scripts/coldResetDataDpm.sh
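The advice above (several soft resets, then a cold reset as a last resort) can be sketched as a small retry helper. This is a hypothetical sketch: the helper name is invented, and it assumes the reset scripts return a meaningful exit status, which may not be true; adapt the success check to however DPM health is actually probed.

```shell
# retry_then_fallback CMD FALLBACK [TRIES]: run CMD up to TRIES times until it
# succeeds; if it never succeeds, run FALLBACK once. Hypothetical helper.
retry_then_fallback() {
    local cmd=$1 fallback=$2 tries=${3:-3} i
    for i in $(seq "$tries"); do
        if $cmd; then
            echo "succeeded on attempt $i"
            return 0
        fi
    done
    echo "all $tries attempts failed; running fallback (may complicate recovery)"
    $fallback
}

# Possible wiring with the scripts named on this page (the exit-status
# behavior of those scripts is an assumption):
#   retry_then_fallback heavy-photon-daq/software/scripts/resetDataDpm.sh \
#                       heavy-photon-daq/software/scripts/coldResetDataDpm.sh 3
```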
After the reset, the FEBs and hybrids can be brought up again via the medm GUIs.
This error comes from the event builder and indicates that we are dropping packets. It is raised in RssiContributor.cc in the event_builder by the acceptFrame thread.
Effect: rol BUSY.
Causes: dropped frames.
Fix: reset the COBs.
This usually indicates that the FEBs lost clock.
Effect: not possible to run.
Causes: FEBs lost clock.
Fix: power-cycle the FEBs and restart run control.
1) If it is running, kill Rogue by closing the SvtCodaRun.py GUI that is running in hpsrun-clonfarm2 (TigerVNC window).
2) In a terminal on hpsrun-clonfarm2: source resetDataDpms.sh
This script is in clonfarm2:/data/hps/slac_svt/server/heavy-photon-daq/software/scripts
3) In the terminal in which Rogue had been running, execute "cob_dump --all atca1".
In the resulting print-out, look for all codes to be 0xea. If they do not all go
to 0xea, then execute cob_cold_reset atca1 and then cob_rce_reset atca1.
4) If FEBs are powered on, power them off in the GUI window svtFebMain.adl as follows:
At the top of the window next to "ALL FEB", do in this order:
a) Turn ANAN off
b) Turn ANAP off
c) Turn DIGI off
5) Start Rogue (SvtCodaRun.py)
6) In the TigerVNC window where RunControl is (CODA), execute Configure and Download. Doing this early, at this stage, can prevent problems that sometimes occur if it is done after the SVT startup.
7) In svtFebMain.adl, at the top, next to ALL FEB, turn DIGI on. Then wait until all the DIGI currents exceed 1.0 amp (about 10 seconds). Then, in quick succession, turn on in this order: ANAP followed by ANAN.
8) In the Rogue GUI SvtCodaRun.py Variables tab, set "Poll Enable" to True.
Watch below for all of the links to turn "True".
To be sure, click "FebArray->AXIVersion->Uptime" and watch to see that the Uptime is
incrementing every few seconds.
9) Go to the HpsSVTDAQRoot tab in Rogue and click "Load Settings". Select the file
rce-test-eel.yml. Wait for that to complete (takes a few seconds).
10) Go back to the Variables tab in Rogue and set "GlobalHybridPowerSwitch" to "On".
Wait for all the ANAP currents to settle.
If a FEB is not responding at this point, in Rogue set Poll Enable to "False" and
then power cycle just the bad FEB in the same sequence as described above, again
waiting for DIGI to go above 1 amp before turning on ANAP and ANAN.
11) Go again to the HpsSVTDAQRoot tab in Rogue and repeat the "Load Settings".
12) If all the FEBs look good (green) and SVT bias is on, then CODA should be ready
for "Prestart" followed by "Go"
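The 0xea check from step 3 lends itself to a small filter. The sketch below is hypothetical: it assumes the cob_dump print-out contains the status codes as 0xNN tokens and simply flags anything that is not 0xea.

```shell
# check_cob_codes: read cob_dump output on stdin and flag any two-digit hex
# code that is not 0xea. Hypothetical helper; assumes codes appear as 0xNN.
check_cob_codes() {
    local bad
    bad=$(grep -oE '0x[0-9a-fA-F]{2}' | grep -iv '0xea')
    if [ -n "$bad" ]; then
        echo "NOT READY -- codes other than 0xea seen:"
        echo "$bad" | sort | uniq -c
        echo "try: cob_cold_reset atca1, then cob_rce_reset atca1"
        return 1
    fi
    echo "all codes 0xea -- COBs look ready"
}

# Example use: cob_dump --all atca1 | check_cob_codes
```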
Configuration for CODA is PROD77_SVT
Download the trigger configuration trigger/HPS/Run2021/Before_Sep16/hps_v2_svtOnly_noThr.trg
It is critical to be sure to reset the data DPMs before you do a baseline run:
heavy-photon-daq/software/scripts/resetDataDpm.sh
The clonfarm2 and/or clonfarm3 ROCs will often dump a bunch of nonsense for a bit before the run starts recording triggers, so don't get scared if you see it spewing garbage for a few seconds at the beginning of the run.
The data will end up on clondaq7, so move the data (via scp) from clondaq7:/data/stage_in/hpssvt_<run_number>/hpssvt_<run_number>.evio.00000 to clonfarm1:/data/hps/slac_svt/server/thresholds/run
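As a worked example of the path template above, a tiny helper can print the scp source and destination for a given run number (the helper name and structure are illustrative; the paths are the ones quoted in the text):

```shell
# baseline_paths RUN: print the scp source and destination used when staging
# a baseline run for threshold processing. Illustrative helper only.
baseline_paths() {
    local run=$1
    echo "clondaq7:/data/stage_in/hpssvt_${run}/hpssvt_${run}.evio.00000"
    echo "clonfarm1:/data/hps/slac_svt/server/thresholds/run/"
}

# Example use (the run number is a placeholder):
#   scp $(baseline_paths 12345)
```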
Next, process this data to produce a threshold file. Thresholds require the fw channel mapping, so use (as hpsrun):

bash
sconda
crogue
source /data/hps/src/setupHpstrEnv.sh
cd /data/hps/slac_svt/server/thresholds/run
hpstr /data/hps/src/hpstr/processors/config/evioSvtBl2D_cfg.py -i hpssvt_<run_number>.evio.00000 -o hpssvt_<run_number>_bl2d_fw.root -c fw
python makeSvtThresholds.py -i hpssvt_<run_number>_bl2d_fw.root -o svt_<run_number>_thresholds2pt5sig_1pt5sigF5H1
cp svt_<run_number>_thresholds2pt5sig_1pt5sigF5H1.dat ../
Then update /usr/clas12/release/1.4.0/parms/trigger/HPS/Run2021/svt/svt_config.cnf to point to the new threshold file and make a log entry.
"Online baselines" can be produced by:

bash
sconda
crogue
source /data/hps/src/setupHpstrEnv.sh
cd /data/hps/slac_svt/server/thresholds/run
hpstr /data/hps/src/hpstr/processors/config/evioSvtBl2D_cfg.py -i hpssvt_<run_number>.evio.00000 -o hpssvt_<run_number>_bl2d_sw.root -c sw
python makeSvtCond.py -i hpssvt_<run_number>_bl2d_sw.root -o svt_<run_number>
The output svt_<run_number>_cond.dat is in the format needed to upload to the database. This file should be moved to ifarm:/work/hallb/hps/phys2021_svt_calibrations
They can be loaded into the db by logging into ifarm and doing:

cd /work/hallb/hps/phys2021_svt_calibrations
./load_calibrations.py -f svt_<run_number>_cond.dat -r <run_number>
This can also be used to load offline baselines.