...
- run a batch job in S3DF
- get real-time plots
- for the latest run by default
- if users go back and reanalyze old runs, that would be a user-selectable option
- could use X-forwarding from S3DF to start, but ideally would use the open ports we requested (the latter may be stalled)
- timescale: 2 months would be nice; 3 or 4 months is probably OK
how:
- psmon (ZMQ + pyqtgraph); eventually use the "psplot" command line to display the plots
- Murali's elog database to learn which SRV node/port is serving the plots (most challenging)
- to start, require only 1 SRV node
- if we eventually need more than 1 SRV node, we would have to write another layer on top of SRV, with 1 core collecting and serving the plots
- to make it easy for scientists:
- need some sort of script to poll the elog database for the most recent run (most challenging)
- if a run number goes backwards, ignore it by default
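The polling logic above could be sketched as follows. This is a minimal sketch: how the elog database is actually queried is not shown here, and the class name and interface are assumptions, not the real elog API.

```python
class RunPoller:
    """Track the most recent run number across repeated elog polls.

    By default a run number that goes backwards (e.g. someone
    reanalyzing an old run) is ignored; allow_backwards=True makes
    reanalysis a user-selectable option, as in the notes above.
    """

    def __init__(self, allow_backwards=False):
        self.allow_backwards = allow_backwards
        self.last_run = None

    def update(self, run):
        """Feed the latest run number seen in the elog database.

        Returns the run number if it is new and should be analyzed,
        or None if it should be ignored (repeat, or backwards run
        when allow_backwards is False).
        """
        is_new_forward = self.last_run is None or run > self.last_run
        is_reanalysis = self.allow_backwards and run != self.last_run
        if is_new_forward or is_reanalysis:
            self.last_run = run
            return run
        return None
```

A script would call `update()` with the newest run number on each poll and launch (or retarget) the real-time plots whenever it returns a value.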
...
- need to get "mpirun -n 5 python andor.py" working (psmon issue). A hack of "if rank==4: publish.init()" seems to work; then we can eliminate "publish.local=True" and use "psplot ANDOR" to get the plots
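The rank-gating hack above could look roughly like this in andor.py. The choice of rank 4 comes from the note; the plot data is a placeholder, and the exact psmon plot-construction details should be checked against the psmon documentation rather than taken from this sketch:

```python
PUBLISH_RANK = 4  # the "if rank==4: publish.init()" hack from the notes

def is_publisher(rank, publish_rank=PUBLISH_RANK):
    """Only one MPI rank should start the psmon publisher;
    calling publish.init() on every rank is what seems to break."""
    return rank == publish_rank

def main():
    # Launch as: mpirun -n 5 python andor.py
    # View remotely with: psplot ANDOR
    from mpi4py import MPI          # assumed available in the batch environment
    from psmon import publish
    from psmon.plots import XYPlot

    rank = MPI.COMM_WORLD.Get_rank()
    if is_publisher(rank):
        # No publish.local = True here: plots are served over ZMQ
        # so "psplot ANDOR" can attach from elsewhere.
        publish.init()

    # ... per-event analysis runs on every rank ...

    if is_publisher(rank):
        # Placeholder spectrum; real code would send accumulated data.
        plot = XYPlot(0, "ANDOR", [0, 1, 2], [10.0, 11.0, 9.5])
        publish.send("ANDOR", plot)
```

`main()` is only meaningful under mpirun; the gating itself is just the `is_publisher` check.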
- two shmem issues (why we use live mode, not shmem):
- it is easy to drop events in shmem (we don't want this for normalization)
- it is hard for Ric's event builder to send groups of events reliably to one core of the shmem
- one offline issue:
- we can miss events due to deadtime, but Matt provides a counter so experimenters can watch for when this happens (see the code example above)
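The deadtime check above amounts to comparing the counter against the events actually delivered. A minimal sketch, assuming the counter increments once per produced event (the counter's name and readout are not specified in these notes):

```python
def missed_events(counter_start, counter_end, events_seen):
    """Estimate events lost to deadtime over a run segment.

    counter_start, counter_end: readings of the per-event counter
        (the counter Matt provides) at the segment boundaries.
    events_seen: events actually delivered to the analysis.
    """
    produced = counter_end - counter_start
    # Clamp at zero so counter jitter never reports negative losses.
    return max(produced - events_seen, 0)
```

Experimenters would watch for a nonzero result as the signal that deadtime is costing them events.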