
Overview

Before an EpixHR camera becomes available in hardware form, an emulator is to be set up in SRCF to develop and test the operation of all of the components involved in the DAQ for this device.  This set-up would consist of:

  • 10 or 20 DRP nodes configured with the DrpTDet KCU firmware
    • The goal is to have each DRP handle 1 or 2 tiles/panels of the camera
  • The tdetsim sim_length parameter configured to produce a data volume similar to that of the real camera
  • Substituting real EpixHR2x2 data for the tdetsim data in the DRP's event() method
  • A DrpPython script to compress event data into 'FEX' contributions, while retaining a prescaled amount of the original 'raw' data

Method

As an initial step, the DRP code was modified to support an epixhremu detector.  This detector requires the TDetSim firmware in the KCU.  The plan is to support 1 or 2 camera tiles/panels on 1 or 2 lanes per segment-level DRP.  An EpixHRemu C++ class was created to dummy up the detector's serial number and data.

Serial number

The serial number is of the form epixhremu_00cafe0000-0000000000-0000000000-0000000000-0000000000-0000000000-0000000000, where the 4 zeros after 'cafe' are replaced with the segment number (in hex), i.e. ...00cafe0000... through ...00cafe0013... for segments 0 through 19.
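For illustration, a sketch (not the actual EpixHRemu C++ code) that builds this string for a given segment, assuming the segment number is formatted as 4 hex digits, consistent with 20 segments spanning 0000 through 0013:

def emu_serial(segment):
    # '00cafe' followed by the segment number as 4 hex digits (assumed),
    # then six all-zero 10-digit groups
    return "epixhremu_00cafe%04x" % segment + "-0000000000" * 6

For example, emu_serial(19) yields epixhremu_00cafe0013-0000000000-...-0000000000.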

Data

To emulate a data bandwidth similar to that expected from the real detector, the TDetSim sim_length parameter was set to 55292, corresponding to ~221 kB of pixel data.  This matches the panel size: 144 × 192 pixels per ASIC × 4 ASICs × 2 bytes per pixel = 221184 bytes.

The data passed downstream is provided by an XTC file of existing data.  This file is specified on the DRP command line through an xtcfile keyword argument.  If the file begins with some junk events, an l1aOffset kwarg can be used to skip past the first N events.  The substitute data is further indexed by the segment number multiplied by the number of lanes in use, so that the same data is not passed for every panel.
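As an illustration, the DRP launch line might carry these as keyword arguments (hypothetical path and elided options; the exact flag syntax follows the local cnf conventions):

drp ... -k "xtcfile=/path/to/data.xtc2,l1aOffset=4"

The per-panel indexing described above amounts to something like the following sketch (names are illustrative; the actual logic lives in the EpixHRemu C++ class):

# first event index drawn on by this segment's panels (sketch)
first_index = l1aOffset + segment * num_lanes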

The data files currently (9/1/23) available were recorded during the EpixHR2x2 testing.  That detector had only one panel sort-of working, which was handled by ASIC 0.  One file with somewhat varying data in it is rixx1003721-r0106-s010-c000.xtc2.  For the emulated data (either 4 or 8 panels, depending on the lane mask), each panel is given ASIC 0 data from a different event of this file, so the panels differ from one another; because the same events are reused for every trigger, though, all emulated events carry identical data.

Python detector interface

The above data was recorded (in April 2023) and used to develop the detector interface.  From it we see events like the following in the data files.  Both runs were taken with one panel per segment: the first is from run 277 (psana://exp=tstx00417,run=277,dir=/cds/data/drpsrcf/tst/tstx00417/xtc) and contains the segment 3 panel, and the second is from run 276 (psana://exp=tstx00417,run=276,dir=/cds/data/drpsrcf/tst/tstx00417/xtc), containing all 20 segments.
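A minimal psana sketch for browsing one of these runs (the detector name 'epixhr_emu' is taken from the script below, and calib() is assumed to be implemented for this detector):

from psana import DataSource

ds = DataSource(exp='tstx00417', run=277,
                dir='/cds/data/drpsrcf/tst/tstx00417/xtc')
for run in ds.runs():
    det = run.Detector('epixhr_emu')
    for evt in run.events():
        panels = det.raw.calib(evt)  # calibrated panel data
        break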

DrpPython

In the next step, the new DrpPython functionality was used to compress the raw data with libpressio.  We first tried the SZ algorithm and found that its performance didn't scale well with rate; SZ3 worked better, and we saw compression times of ~3.5 ms at a trigger rate of 5 kHz.  Calibrating the data to ready it for the compressor took 1.8 ms.  Sixty workers (60 Python processes) were used to distribute the load and achieve this rate.  The following listing shows the DrpPython script used:

epixHrEmu.py
from psana import DataSource
from psana.dgramedit import AlgDef, DetectorDef, DataType
import numpy as np
from libpressio import PressioCompressor
import json

# Define the compressor configuration (SZ was tried first; SZ3 scaled
# better with rate, as noted above):
lpjson = {
    "compressor_id": "sz3",            # the compression algorithm
    "compressor_config": {
        "sz3:abs_error_bound" : 10,    # maximum absolute error
        "sz3:metric"          : "size",
        #"pressio:nthreads"   : 4,
    },
}

# drp_info is supplied by the DRP process that launches this script
ds = DataSource(drp=drp_info, monitor=True)
thread_num = drp_info.worker_num  # identifies this worker process

cfgAlg = AlgDef("config", 0, 0, 1)
fexAlg = AlgDef("fex", 0, 0, 1)
detDef = DetectorDef(drp_info.det_name, drp_info.det_type, drp_info.det_id)
cfgDef = {
    "compressor_json" : (str,      1),  # compressor configuration, recorded at Configure
}
fexDef = {
    "fex"             : (np.uint8, 1),  # compressed payload is a byte stream, hence uint8
}
nodeId = None
namesId = None

cfg = ds.add_detector(detDef, cfgAlg, cfgDef, nodeId, namesId, drp_info.det_segment)
det = ds.add_detector(detDef, fexAlg, fexDef, nodeId, namesId, drp_info.det_segment)

cfg.config.compressor_json = json.dumps(lpjson)

ds.add_data(cfg.config)

# Instantiate the compressor from the configuration defined above
compressor = PressioCompressor.from_config(lpjson)
#print(compressor.get_config())

for myrun in ds.runs():
    epixhr = myrun.Detector('epixhr_emu')
    for nevt, evt in enumerate(myrun.events()):
        cal = epixhr.raw.calib(evt)           # calibrate the raw panel data
        det.fex.fex = compressor.encode(cal)  # compress into the FEX payload
        ds.add_data(det.fex)
        # Retain the uncompressed 'raw' data for 1 of every 1000 events
        if nevt % 1000 != 0:
            ds.remove_data('epixhr_emu', 'raw')
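On the analysis side, the detector interface decompresses the fex payload using the same configuration.  A minimal sketch of that inverse step, assuming the calibrated array's shape (panel_shape) and float32 dtype are known, and with fex_bytes holding a det.fex.fex payload:

decomp = PressioCompressor.from_config(lpjson)
out = np.empty(panel_shape, dtype=np.float32)  # panel_shape: assumed known
cal = decomp.decode(fex_bytes, out)            # returns the decompressed array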

The times were measured by adding Prometheus metrics to the script (not shown here, for clarity) and viewing the results with Grafana.  This performance was measured using one DRP segment and one panel.  A typical live AMI image from such a run (segment 0 on drp-srcf-cmp035) is shown below:

[Image: live AMI display of a decompressed event]


Note that this shows data that has been decompressed by the detector interface.  The following is a snapshot of the Grafana performance plot showing the calibration and compression times (1.94 ms and 3.55 ms, respectively) seen for a 5 kHz run.  The green trace, at 5.52 ms, is the time spent in the Python script for each event, as seen from the C++ code's perspective.

[Image: Grafana plot of calibration and compression times]
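The metrics themselves are omitted above; a sketch of what such timing instrumentation could look like with the prometheus_client package (hypothetical metric name), slotted into the event loop of epixHrEmu.py:

from prometheus_client import Summary

compress_time = Summary('drp_compress_seconds',
                        'Time spent in compressor.encode per event')

# inside the event loop:
with compress_time.time():
    det.fex.fex = compressor.encode(cal)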
