want:
- give the drp a psana-python script
- drive that psana-python script by calling psana_set_dgram(Dgram*) (would replace the file reading)
ds = DataSource(dgramDsource=True)
myrun = next(ds.runs())
for event in myrun.events():
    pass
the xtcreader C++ code calls psana_set_dgram(Dgram*) (this goes into dgram.cc?)
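The push-style DataSource wanted above could behave like the toy sketch below. Everything here is hypothetical: the real psana_set_dgram(Dgram*) would be a C++ entry point, and the real DataSource(dgramDsource=True) does not exist yet; a plain Python class with a queue stands in for both, just to show the control flow.

```python
import queue

class DgramDataSource:
    """Toy model of a DataSource fed by psana_set_dgram() calls
    from C++ instead of by reading an xtc2 file (hypothetical)."""

    def __init__(self):
        self._queue = queue.Queue()

    def set_dgram(self, dgram):
        # In the real design this would be invoked from C++ via
        # psana_set_dgram(Dgram*); here it is a plain Python call.
        self._queue.put(dgram)

    def events(self):
        # Yield dgrams until a None sentinel marks end-of-run.
        while True:
            dgram = self._queue.get()
            if dgram is None:
                return
            yield dgram

# The "drp" pushes two fake dgrams, then signals end-of-run.
ds = DgramDataSource()
for payload in (b"evt0", b"evt1"):
    ds.set_dgram(payload)
ds.set_dgram(None)

events = list(ds.events())
```

The point of the sketch is only the inversion of control: the event loop blocks on dgrams pushed in by the C++ side rather than pulling them from a file.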
idea:
- make the drp multi-process rather than multi-threaded (Valerio); talk with Elliott (similar to Legion)
o have standalone multi-threaded or multi-process C++ code run python scripts
- (Mona) use xtcreader to represent one of those processes for development
modified to call a python script, perhaps like:
https://github.com/slac-lcls/lcls2/blob/c4fa38db1799b5c2acf6e4908daf50403c1bf616/psdaq/drp/BEBDetector.cc#L80
- xtcreader C++ calls psana-python script
o it makes a new DataSource (DgramDataSource?)
o as drp C++ receives a new dgram, it passes it to the DgramDataSource (instead of reading from file)
ds = DataSource(dgramDsource=True)
C++ call: psana_set_dgram(Dgram*) (would replace the file reading)
the above Dgram is passed (somehow) to psana/src/dgram.cc (which currently
does the file reading), which then creates the python "Dgram"
maybe use the "buffer"/view interface?
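On the buffer/view idea: the C++ side could expose the dgram memory to Python through the buffer protocol (e.g. via PyMemoryView_FromMemory in the CPython C API). The pure-Python sketch below imitates that with a bytearray standing in for drp-owned memory; it shows that a view is zero-copy, and also why it is fragile if the drp mutates the memory.

```python
# Pretend this bytearray is dgram memory owned by the C++ drp
# (in the real code it would come in via the buffer protocol).
drp_buffer = bytearray(b"\x01\x02\x03\x04")

# Zero-copy: the view shares storage with drp_buffer, no bytes
# are duplicated.
view = memoryview(drp_buffer)
first_byte = view[0]          # 1

# If the drp overwrites the dgram in place, the Python view
# silently changes too -- the lifetime hazard discussed below.
drp_buffer[0] = 0xFF
mutated_byte = view[0]        # 255
```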
two options:
(1) as with shmem, copy every dgram so that python reference counting works in the standard way; we could do the same thing here. This decouples psana memory management from drp memory management
(2) don't copy the dgram: more efficient, but then we can't delete the dgram in the normal way, and we can't save information from old events
my inclination is to do (1)
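The difference between the two options can be shown with a small pure-Python sketch (a bytearray again stands in for drp-owned dgram memory that gets recycled for the next event):

```python
# Hypothetical drp-owned buffer, reused from event to event.
drp_buffer = bytearray(b"event-0 data")

# Option (1): copy into a Python-owned object. Reference counting
# then manages its lifetime independently of the drp buffer.
copied = bytes(drp_buffer)

# Option (2): zero-copy view into the drp's memory.
view = memoryview(drp_buffer)

# The drp recycles the buffer for the next event...
drp_buffer[:] = b"event-1 data"

# ...the copy still holds the old event, but the view now shows
# the new one, so keeping data from old events is unsafe with (2).
old_event = copied        # still b"event-0 data"
stale_view = bytes(view)  # now b"event-1 data"
```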
A potential issue (lower priority): this method of running psana (like shmem) does not have a scalable way of loading the calibration constants: each core will access the database. Ideally we would fix this.
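One common mitigation (an assumption of these notes, not anything implemented) is to let the first process fetch the constants and share them through a local cache, so the database sees one query instead of one per core. A minimal file-based sketch, ignoring inter-process locking; fetch_from_db() is a hypothetical stand-in for the real calibration-database query:

```python
import json
import os
import tempfile

# Fresh cache directory so the sketch is self-contained.
CACHE = os.path.join(tempfile.mkdtemp(), "calib_cache.json")

def fetch_from_db(run):
    # Hypothetical stand-in for the calibration-database query;
    # the counter lets us verify the DB is hit only once.
    fetch_from_db.calls += 1
    return {"run": run, "pedestal": 17.0}
fetch_from_db.calls = 0

def load_constants(run):
    # First process to arrive populates the cache; later processes
    # read the file and never touch the database.
    if os.path.exists(CACHE):
        with open(CACHE) as f:
            return json.load(f)
    consts = fetch_from_db(run)
    tmp = CACHE + ".tmp"
    with open(tmp, "w") as f:
        json.dump(consts, f)
    os.replace(tmp, CACHE)  # atomic rename on POSIX
    return consts

a = load_constants(42)
b = load_constants(42)  # served from the cache, no second DB hit
```

A real fix would also need locking (or an MPI-style broadcast from one reader), but the shape is the same: one database access, many consumers.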