
Note that this shows data that has been decompressed by the detector interface.  The following is a snapshot of the Grafana performance plot showing the calibration and compression times (1.94 ms and 3.55 ms) observed for a 5 kHz run.  The green trace, at 5.52 ms, is the amount of time spent in the Python script for each event, as seen from the C++ code's perspective.

The following two screen shots show a run with the EpixHrEmu DRP running at 5 kHz with 26 workers.  This was taken in a marginally different situation from the plot above, which used 60 workers.  With a processing time of 5.5 ms and a trigger rate of 5 kHz, one might expect the DRP to keep up (no deadtime) with 5.5 ms × 5 kHz = 27.5 workers.  When dialing down the number of workers, I found that the system runs without deadtime even at 26 workers, but other Grafana plots show the DRP struggling to keep buffers available, and the rate plots become noisier.  On average, though, the DRP keeps up.  The first screen shot below is an htop display.  It shows that there are insufficient unloaded cores left to run a second DRP instance; indeed, when tried, such a system produced high deadtime.  If one DRP could handle 2 panels (2 lanes) of data (currently this crashes), it might fit onto one 64-core node, but it would be tight.  We decided against going down this path because some processing power headroom is needed for things like file writing.
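The back-of-the-envelope worker estimate above (per-event time times trigger rate, rounded up) can be sketched as a small helper.  This is a minimal sketch for illustration only; the function name and signature are mine, not part of the DAQ code:

```python
import math

def min_workers(trigger_rate_hz: float, per_event_ms: float) -> int:
    """Smallest worker count whose aggregate throughput matches the
    trigger rate: ceil(rate * per-event time)."""
    return math.ceil(trigger_rate_hz * per_event_ms * 1e-3)

# 5.5 ms per event at 5 kHz -> 27.5 events in flight, so 28 workers
print(min_workers(5_000, 5.5))
```

Rounding up gives 28 workers with no margin; the run described above shows that 26 can just barely keep up on average, at the cost of buffer pressure and noisier rate plots.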

[Two screen shots of the 5 kHz, 26-worker run]