Projects
- prototype point-and-click interface
- (2) glue in psana/roi/process spawning (getting the detector list from DetNames())
- (1) GraphManager (graph synchronization)
- deleting graph pieces (reference counting)
- (3) drag-and-drop box designer
- epics for output boxes
Done
- set ROI, get json
- start up new processes for windows
Tasks
- monitoring EB design iteration
- gasnet
- python analysis-chain mockup
...
- Use separate processes for 1 event in a Redis-like decentralized environment?
- support an arbitrary loopless graph of analysis
- support for reading/writing EPICS variables in real time (this requires analysis/daq/controls to share the same python environment); would help with feedback
- maybe this could replace hutch python for controlling the DAQ? (scans)
- 1 event 1 core
- scientists only write algorithm boxes
- two levels: "box" level, and then the "scheduler"
- how do we pass metadata through the tree? (e.g. for two ROIs in a row, the second one needs to know about the first one)
- policy for type-checking and shape-checking?
- each analysis box could have optional "output box" for distribution of results
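The metadata question above (two ROIs in a row) can be sketched concretely. This is a minimal sketch, not the actual AMI design: the `Box`/`ROIBox` classes and the `roi_offset` metadata key are assumptions made up for illustration.

```python
import numpy as np

class Box:
    """Hypothetical analysis box: process(data, meta) -> (data, meta)."""
    def __init__(self, name):
        self.name = name
    def process(self, data, meta):
        raise NotImplementedError

class ROIBox(Box):
    """Crop a 2-D array and record the accumulated crop offset in the
    metadata dict, so a downstream ROI knows about the upstream one."""
    def __init__(self, name, rows, cols):
        super().__init__(name)
        self.rows, self.cols = rows, cols
    def process(self, data, meta):
        r, c = self.rows, self.cols
        out = data[r[0]:r[1], c[0]:c[1]]
        # accumulate offsets from any upstream ROI (assumed key name)
        r0, c0 = meta.get('roi_offset', (0, 0))
        meta = dict(meta, roi_offset=(r0 + r[0], c0 + c[0]))
        return out, meta

# two ROIs in a row: the second sees the first's offset via metadata
img = np.arange(100).reshape(10, 10)
meta = {}
data, meta = ROIBox('roi1', (2, 8), (2, 8)).process(img, meta)
data, meta = ROIBox('roi2', (1, 4), (1, 4)).process(data, meta)
print(meta['roi_offset'])  # absolute offset of roi2's window: (3, 3)
print(data.shape)          # (3, 3)
```

Threading the metadata dict alongside the data is one answer to the pass-through question; type/shape checking could hang off the same dict.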
Multiprocessing
- does the display make requests to clients? both algorithm/parameter changes and gather messages
- different rates for different gather requests?
- how do you guarantee consistency with changing calculations e.g. ROI?
...
- keep them separate from the graphs
- many output boxes (plot, save to disk, etc.)
- three big ideas: management, graph, output
- could be epics
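One way to answer the consistency question above (changing calculations such as an ROI) is to tag every worker result with the graph-config number it was computed under, and have the gatherer drop results from a stale graph. A hedged sketch; `gather` and the tuple format are assumptions for illustration.

```python
def gather(results, current_config):
    """Keep only results computed under the current graph configuration.
    Each result is a (graph_config_number, value) tuple (assumed format)."""
    return [value for cfg, value in results if cfg == current_config]

# workers 1 and 3 already run config 7; worker 2 is still on old config 6
results = [(7, 1.5), (6, 9.9), (7, 2.5)]
print(gather(results, current_config=7))  # [1.5, 2.5]
```

This pairs naturally with putting an ami-graph-config number inside every datagram, so staleness is detectable at any gather rate.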
Reconfiguration of Graph
- reconfigure everything downstream of the box that gets changed
- boxes like a "deque" box will clear their deques.
- boxes could have a variety of more complex reconfigures that have fewer side-effects (e.g. not clear the deques in some circumstances)
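The downstream-reconfiguration rule above can be sketched as a graph walk. This is a toy illustration, assuming a hypothetical `DequeBox` whose default `reconfigure` clears its buffer, and an adjacency-dict graph; none of these names come from the actual code.

```python
from collections import deque

class DequeBox:
    """Hypothetical stateful box that buffers recent results in a deque."""
    def __init__(self, maxlen=100):
        self.buf = deque(maxlen=maxlen)
    def reconfigure(self):
        # default reconfigure: drop state accumulated under the old graph
        self.buf.clear()

def reconfigure_downstream(edges, boxes, changed):
    """Reconfigure the changed box and everything reachable downstream."""
    seen, stack = set(), [changed]
    while stack:
        name = stack.pop()
        if name in seen:
            continue
        seen.add(name)
        boxes[name].reconfigure()
        stack.extend(edges.get(name, []))
    return seen

# roi -> sum -> history; changing roi reconfigures all three
edges = {'roi': ['sum'], 'sum': ['history']}
boxes = {'roi': DequeBox(), 'sum': DequeBox(), 'history': DequeBox()}
boxes['history'].buf.extend(range(5))
touched = reconfigure_downstream(edges, boxes, 'roi')
print(sorted(touched))            # ['history', 'roi', 'sum']
print(len(boxes['history'].buf))  # 0
```

The more surgical reconfigures mentioned above would simply override `reconfigure` to preserve state when the change does not invalidate it.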
...
- Feels like it should be split out from the DAQ timestamped data
- recorded separately to hdf5 and handled by offline event builder
- for shmem another process would put EPICS data in a separate buffer (like "epicsStore") for use by AMI
- could be used to send ami data elsewhere (from output boxes)
Data Interface
- boxes consume and produce "datagrams"
- datagrams contain: event ids (plural, must be >0), the ami-graph-config number, a metadata dictionary of serializable python objects, and a data dictionary of numpy arrays
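The datagram contents listed above could be sketched as a small container. The class and field names here are assumptions for illustration, not the actual interface.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Datagram:
    """Sketch of the datagram passed between boxes, per the notes:
    event ids (must be non-empty), the ami-graph-config number, a
    metadata dict of serializable objects, a data dict of numpy arrays."""
    evt_ids: list
    graph_config: int
    meta: dict = field(default_factory=dict)
    data: dict = field(default_factory=dict)

    def __post_init__(self):
        if len(self.evt_ids) == 0:
            raise ValueError('a datagram must carry at least one event id')

dg = Datagram(evt_ids=[1001], graph_config=3,
              meta={'roi_offset': (2, 2)},
              data={'cspad': np.zeros((4, 4))})
print(dg.graph_config, dg.data['cspad'].shape)
```

Carrying the graph-config number in every datagram gives gatherers a way to detect results computed under a stale graph.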
...
- Try to eliminate transitions in the DAQ
- instead use "base-configuration" for a run, and time-ranges-of-validity for configuration changes (scans)
- online and offline event-builders will event-build the configuration changes accordingly
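The base-configuration-plus-validity-ranges idea above amounts to a timestamp lookup: start from the run's base configuration and apply every change whose validity start precedes the event. A minimal sketch; `config_at` and the delta format are hypothetical.

```python
import bisect

def config_at(base, deltas, ts):
    """Resolve the configuration in effect at event timestamp ts.
    deltas is a time-ordered list of (start_ts, changes_dict)."""
    cfg = dict(base)
    starts = [t for t, _ in deltas]
    # apply, in order, every change valid at or before ts
    for _, change in deltas[:bisect.bisect_right(starts, ts)]:
        cfg.update(change)
    return cfg

base = {'gain': 1.0, 'threshold': 10}
deltas = [(100, {'threshold': 12}),  # scan step at t=100
          (200, {'gain': 2.0})]      # scan step at t=200
print(config_at(base, deltas, 50))   # base config only
print(config_at(base, deltas, 150))  # threshold updated
print(config_at(base, deltas, 250))  # both changes applied
```

Under this scheme the online and offline event builders only need to order the deltas against event timestamps, rather than handling configure transitions.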
Output Boxes
- could be EPICS? (and others)
- slow data (e.g. slowly changing background) can be posted back to earlier parts in the graph in a slow "database" fashion
Posting Data to Other Calculations
- All data can be posted
- Slow data (e.g. slowly changing background) can be posted back to earlier parts of the graph
Names
- Each box output gets a "name" that users can use to access it for further computation
- bigger boxes encompassing more steps mean fewer names
PyDM
- could be useful for assembling AMI windows
- AMI is made up of relatively few "building block" windows
- could also be useful for standard-config
To Discuss
- changing data as the run goes on (e.g. DRP pre-scaled uncompressed events)
Projects
- scaling
- ami "calculator"
- pyqtgraph graphical operations:
- "box" gui
- "point and click" gui (how much goes in here)
Packages to Consider
dask
pyqtgraph
mpi
zmq
karabo:
send mail to k. weger
...