
Start SVT DAQ VNC

From the counting house, as hpsrun (should already be done):

In terminal: daqvnc.py connect config hps-svt-2

In another terminal: daqvnc.py connect config hps-svt-3


Remotely (from the Hall B gateway):

ssh -Y clonfarm2
vncviewer :2

ssh -Y clonfarm3
vncviewer :2
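
If the vncviewer on the gateway is TigerVNC or TightVNC (an assumption, not stated on this page), its -via option can set up the tunnel and connect in one step instead of running the viewer over X forwarding:

vncviewer -via clonfarm2 localhost:2    # tunnels through clonfarm2, then attaches to display :2 there
vncviewer -via clonfarm3 localhost:2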


Remotely (outside jLab network):

# Tunnel to login.jlab.org. This opens a new xterm with an open tunnel to login.

xterm -e "ssh -N -L <port>:hallgw4.jlab.org:22 <username>@login.jlab.org" &

# Tunnel to a machine behind the firewall, spawning a top process to keep the tunnel open. First enter the gateway PIN+OTP, then the password for the machine behind the firewall.

xterm -e "ssh -t -p <port> -L 5902:localhost:5902 <username>@localhost \ \" ssh -t -L 5902:localhost:5902 clonfarm2.jlab.org \ \"top\" \" " &

Then:

  • On Linux: vncviewer localhost:2
  • On Mac: open Screen Sharing and connect to localhost:5902
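
As an alternative to the two-xterm recipe above, recent OpenSSH clients can build the same tunnel in a single command with the ProxyJump (-J) option. This is a sketch assuming the same hosts and usernames, and that jump connections are allowed through login.jlab.org and hallgw4:

# Sketch only: one ProxyJump hop through login.jlab.org and hallgw4.jlab.org,
# forwarding VNC display :2 (port 5902) on clonfarm2 to the local machine.
ssh -J <username>@login.jlab.org,<username>@hallgw4.jlab.org \
    -L 5902:localhost:5902 <username>@clonfarm2.jlab.org
# Then connect the VNC client to localhost:2 (Linux) or localhost:5902 (Mac), as above.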

Software installation locations (11 Aug 2021)

Note: Any crate-related command should be issued by SVT experts only.

SDK software installation to talk to the ATCA crate

source /usr/clas12/release/1.4.0/slac_svt_new/V3.4.0/i86-linux-64/tools/envs-sdk.sh
 
examples:
cob_dump --all atca1       => dumps the RCE status (booted is 0xea)
cob_rce_reset atca1        => resets all the RCEs
cob_rce_reset atca1/1/0/2  => resets a particular DPM (in this case dpm1)
cob_cold_data_reset atca1  => "power cycles" the RCEs (sometimes they do not come back up nicely, so rce_reset might be needed afterwards)

....
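
A hedged sketch combining the commands above into a typical recovery sequence (SVT experts only; the wait time is illustrative, not an official value):

source /usr/clas12/release/1.4.0/slac_svt_new/V3.4.0/i86-linux-64/tools/envs-sdk.sh
cob_cold_data_reset atca1   # "power cycle" the RCEs
sleep 30                    # illustrative pause while the RCEs boot
cob_dump --all atca1        # booted RCEs should report 0xea
cob_rce_reset atca1         # only if some RCEs did not come back up nicely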





SVT software is installed in

/data/hps/slac_svt/

  • diskless: exported to the COBs via NFS (server hosted on clonfarm1; the NFS export has to be v2, important!)
  • daq: exported to the COBs via NFS; has been compiled on DTM1
  • server: contains the current software installation
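
A quick way to verify that the exports are actually visible (a sketch, assuming showmount is installed on the machine you check from):

showmount -e clonfarm1      # list the NFS exports served by clonfarm1
# The diskless and daq areas under /data/hps/slac_svt/ should appear here;
# remember the diskless export has to be served as NFS v2.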


Start rogue servers

On clonfarm2 start the rogue server


source /usr/clas12/release/1.4.0/slac_svt_new/anaconda3/etc/profile.d/conda.sh
conda activate rogue_5.9.3
cd /data/hps/slac_svt/server/heavy-photon-daq/software/scripts/
python SvtCodaRun.py  --local --env JLAB --epicsEn


The "epicsEn" flag is necessary to enable controls via Epics.

On clonfarm3 start the dummy server


python SvtCodaDummy.py --local --env JLAB


Taking a CODA run

If opening the Rogue GUIs for the first time, make sure all of the FEBs are turned off. 


To take a CODA run, both the rogue server and a dummy server need to be started. To start the rogue server, first ssh into clonfarm2 and issue the following commands:

source /usr/clas12/release/1.4.0/slac_svt_new/anaconda3/etc/profile.d/conda.sh
conda activate rogue_5.9.3
cd /data/hps/slac_svt/server/heavy-photon-daq/software/scripts/
python SvtCodaRun.py  --local --env JLAB --epicsEn

The dummy server runs on clonfarm3 and can be brought up as follows after ssh'ing into that machine:

source /usr/clas12/release/1.4.0/slac_svt_new/anaconda3/etc/profile.d/conda.sh
conda activate rogue_5.9.3
cd /data/hps/slac_svt/server/heavy-photon-daq/software/scripts/
python SvtCodaDummy.py  --local --env JLAB --epicsEn

At this point, the FEBs and hybrids can be brought up via the medm GUIs.


Once the hardware has been powered up, you can initialize a run in CODA using the configuration PROD77_SVT and the config file hps_v2_svtOnly_noThresh.

Note: CODA run control should be running in a VNC on clondaq7. If not, contact the DAQ expert.



Start the online monitor

startSVTCheckout


Reset the dataDPM

heavy-photon-daq/software/scripts/resetDataDpm.sh


If the DPMs do not come up after several resets, one might try a cold reset (but this might create issues in the recovery):

heavy-photon-daq/software/scripts/coldResetDataDpm.sh
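
Putting the two scripts together, a minimal sketch of the reset-then-escalate sequence described above (the full path assumes the scripts directory used elsewhere on this page; the retry count and pause are illustrative, not expert-blessed values):

cd /data/hps/slac_svt/server/heavy-photon-daq/software/scripts/
for i in 1 2 3; do          # a few normal resets first
    ./resetDataDpm.sh
    sleep 10                # illustrative pause between attempts
done
# Only if the DPMs are still down after several resets (riskier, see note above):
# ./coldResetDataDpm.sh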

At this point, the FEBs and hybrids can be brought up via the medm GUIs.




CODA / Rogue Errors

1) RssiContributor::acceptFrame[X]: error mistatch in batcher sequence 000000dc : 00000000

This is in the event builder and indicates that we are dropping packets.

This is raised in RssiContributor.cc in the event_builder by the acceptFrame thread.


Effect: ROL goes BUSY.

Causes: Dropped frames.

Fix: Reset the COBs.
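
Based on the SDK commands earlier on this page, resetting the COBs would look roughly like this (a sketch assuming atca1 is the affected crate; crate commands are for SVT experts only):

source /usr/clas12/release/1.4.0/slac_svt_new/V3.4.0/i86-linux-64/tools/envs-sdk.sh
cob_rce_reset atca1         # reset all the RCEs on the crate
cob_dump --all atca1        # verify they come back booted (0xea)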

2) ROL crashes during download→prestart phase showing readback errors on the FEBs

This usually indicates that the FEBs lost clock.

Effect: not possible to run

Causes: FEBs lost clock

Fix: Recycle the FEBs and restart run control.
