Start SVT DAQ VNC

From the counting house, as hpsrun (this should already be done):

In a terminal: daqvnc.py connect config hps-svt-2

In another terminal: daqvnc.py connect config hps-svt-3


Remotely (from the Hall B gateway):

ssh -Y clonfarm2
vncviewer :2

ssh -Y clonfarm3
vncviewer :2
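
If X forwarding with ssh -Y is slow, an alternative (not part of the documented procedure, just a sketch) is to tunnel the VNC port through ssh; display :2 normally corresponds to TCP port 5902:

ssh -L 5902:localhost:5902 clonfarm2    # forward local port 5902 to the VNC server on clonfarm2
vncviewer localhost::5902               # '::' selects an explicit port; :2 corresponds to 5902

The same works for clonfarm3.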

Software installation locations (11 Aug 2021)


SDK software installation used to talk to the ATCA crate

source /usr/clas12/release/1.4.0/slac_svt_new/V3.4.0/i86-linux-64/tools/envs-sdk.sh
 
examples:
cob_dump --all atca1  => dumps the RCE status (booted is 0xea)
cob_rce_reset atca1  => resets all the RCEs
cob_rce_reset atca1/1/0/2  => resets a particular DPM (in this case dpm1)
cob_cold_data_reset atca1  => "power cycles" the RCEs (sometimes they do not come back up cleanly, so cob_rce_reset might be needed afterwards)
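
A typical recovery sequence built from the commands above (a sketch only; the exact order depends on the state of the crate):

source /usr/clas12/release/1.4.0/slac_svt_new/V3.4.0/i86-linux-64/tools/envs-sdk.sh   # set up the SDK environment
cob_cold_data_reset atca1    # "power cycle" the RCEs
cob_dump --all atca1         # check the status; booted RCEs report 0xea
cob_rce_reset atca1          # if some RCEs did not come back up, reset them and check again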


SVT software is installed in

/data/hps/slac_svt/
=> diskless is exported to the cobs via NFS (server hosted on clonfarm1; the export has to be NFS v2, important!)
=> daq is exported to the cobs via NFS; it has been compiled on DTM1
=> server contains the current software installation
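
To verify the exports and the NFS version on clonfarm1 (a sketch; exact options depend on the server configuration):

showmount -e clonfarm1          # list the directories exported to the cobs
cat /proc/fs/nfsd/versions      # on clonfarm1: NFS versions offered by the server (v2 must be enabled)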


Start rogue servers

On clonfarm2, start the rogue server:


source /usr/clas12/release/1.4.0/slac_svt_new/anaconda3/etc/profile.d/conda.sh
conda activate rogue_5.9.3
cd /data/hps/slac_svt/server/heavy-photon-daq/software/scripts/
python SvtCodaRun.py  --local --env JLAB --epicsEn


The "epicsEn" flag is necessary to enable controls via Epics.

On clonfarm3, start the dummy server:


python SvtCodaDummy.py --local --env JLAB
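
A quick way to confirm that the servers are up on their respective hosts (a sketch, assuming the script names above):

pgrep -af SvtCodaRun.py      # on clonfarm2: the rogue server
pgrep -af SvtCodaDummy.py    # on clonfarm3: the dummy server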


Start data taking with Coda

Here are the instructions for starting Coda as the clasrun user:

1) Open a terminal on clonfarm3
> bash
> sconda    #setup conda
> crogue    #activates rogue environment
> startCodaRun    #starts Rogue Coda Run Gui
 
2) Open a terminal on clonfarm3
> bash
> sconda; crogue; startCodaDummy   #starts the dummy rogue server
 
3) Open a terminal on clonfarm3
> runcontrol -rocs
 
Then:
 
connect
configure (hdice1_clonfarm1_clon10new to run with both cobs, hdice1_clonfarm1 to run with the top cob, hdice1_clon10new to run with the bottom cob)
download (for both cobs use hps_v2_noThr.trg)
Prestart
Go
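
The sconda, crogue, startCodaRun and startCodaDummy commands used above are shell shortcuts defined in the clasrun environment. Their exact definitions are not reproduced here; based on the sections above they are roughly equivalent to (a sketch, not the actual definitions):

sconda()         { source /usr/clas12/release/1.4.0/slac_svt_new/anaconda3/etc/profile.d/conda.sh; }   # set up conda
crogue()         { conda activate rogue_5.9.3; }                                                        # activate the rogue environment
startCodaRun()   { cd /data/hps/slac_svt/server/heavy-photon-daq/software/scripts/ && python SvtCodaRun.py --local --env JLAB --epicsEn; }
startCodaDummy() { cd /data/hps/slac_svt/server/heavy-photon-daq/software/scripts/ && python SvtCodaDummy.py --local --env JLAB; }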

Coda information:

We are using Coda release 1.4.0, installed in

/usr/clas12/release/1.4.0
 
HPS trigger configurations are stored in
 
/usr/clas12/release/1.4.0/parms/trigger/HPS/
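
For example, the configuration used in the download step above can be checked with:

ls /usr/clas12/release/1.4.0/parms/trigger/HPS/    # should include hps_v2_noThr.trg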

Update Coda libraries

If event_builder or rogue_lite change, the Coda libraries need to be updated. Just run:


sh /data/hps/slac_svt/copy_libraries.sh



Coda / Rogue Errors

1) RssiContributor::acceptFrame[X]: error mistatch in batcher sequence 000000dc : 00000000

This is in the event builder and indicates that we are dropping packets.

It is raised in RssiContributor.cc in the event_builder, by the acceptFrame thread.


Effect: rol BUSY

Causes: dropped frames

Fix: reset the COBs (a sketch of the reset commands is below)
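
Resetting the COBs can be done with the SDK commands from the section above (a sketch; confirm the crate name before running):

source /usr/clas12/release/1.4.0/slac_svt_new/V3.4.0/i86-linux-64/tools/envs-sdk.sh
cob_rce_reset atca1          # reset all the RCEs
cob_dump --all atca1         # verify that the RCEs report 0xea (booted)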

2) ROL crashes during the download→prestart phase, showing readback errors on the FEBs

This usually indicates that the FEBs lost the clock.

Effect: not possible to run

Causes: FEBs lost clock

Fix: recycle the FEBs and restart run control