...

This approach (supported for data taken after October 2015) is the simplest, but it has the disadvantage that it is not guaranteed to keep up with the data (in most cases it does when using MPI, as shown in the example here; a minimal sketch also follows the list below).  It requires three extra keywords when constructing your DataSource:

Code Block
from psana import DataSource
ds = DataSource('exp=xpptut15:run=54:smd:dir=/reg/d/ffb/xpp/xpptut15/xtc:live')
  • "smd" means use the DAQ small data, as used in many of the psana-python examples (in particular the MPI example here)
  • the "dir=" argument tells the analysis code to use a special set of disks reserved for the running experiment
  • the "live" argument tells the analysis code to wait for additional data in case the analysis "catches up" to the DAQ

This analysis can be submitted to the batch system as usual, but it should use the psnehhiprioq or psfehhiprioq batch queue (only when your experiment is actively running), as described here.  It typically uses dedicated "fast feedback" (FFB) resources as described here: Fast Feedback System.

Real-Time Analysis Using "Shared Memory"

This approach is somewhat more complex, but it is guaranteed to analyze only "recent" events coming from the DAQ network (no disks involved, so it's fast!).  It has the disadvantage that it is not guaranteed to analyze every event in a run.  This approach is best when you need guaranteed real-time feedback, e.g. for tuning experiment/accelerator settings based on real-time plots.  To use this mode, create a DataSource object like this:

...

There are other complications in starting the code and gathering results from the MPI processes.  You must use the data gathering pattern here.  It is important not to use MPI collective operations like reduce/gather/bcast during the event loop, as this can cause software hangs.  Contact the analysis group or your hutch Point of Contact if you wish to use this mode.
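
The linked data gathering pattern is not reproduced here, but as one illustration of the idea (no collectives inside the event loop), a worker rank can push results to a monitoring rank with point-to-point messages that the monitor polls without blocking.  The tag value, message contents, and loop structure below are illustrative assumptions, not the official pattern.

Code Block
# Illustrative sketch only: NOT the official data-gathering pattern linked above.
# The idea: no reduce/gather/bcast inside the event loop; workers push results
# to a monitoring rank with point-to-point messages that the monitor polls.
import time
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
RESULT_TAG = 1  # arbitrary message tag

if rank == 0:
    # Monitoring rank: poll for results; runs until the job is killed.
    status = MPI.Status()
    while True:
        if comm.Iprobe(source=MPI.ANY_SOURCE, tag=RESULT_TAG, status=status):
            result = comm.recv(source=status.Get_source(), tag=RESULT_TAG)
            # ... update running statistics / real-time plots here ...
        else:
            time.sleep(0.1)  # avoid busy-waiting when no results are pending
else:
    # Worker rank: stand-in for the shared-memory event loop.
    for nevent in range(100):
        result = float(nevent)  # stand-in for a per-event quantity
        comm.send(result, dest=0, tag=RESULT_TAG)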