
An example producer file that adds user data can be found here:
https://github.com/slac-lcls/smalldata_tools/blob/master/examples/SmallDataProducer_userData.py
Each of the supported feature extraction/data reduction mechanisms is described in its own subpage.
The common structure for adding user data is to add a "DetObject" to the producer python file for each detector you would like to extract information from. The relevant lines look like this:
Code Block
    azIntParams = getAzIntParams(run)
    ROIs = getROIs(run)
    detnames = ['epix10k135', 'epix10k2M']
    for detname in detnames:
        havedet = checkDet(ds.env(), detname)
        if havedet:
            common_mode = 84
            if detname == 'epix10k135': common_mode = 80
            det = DetObject(detname, ds.env(), int(run), common_mode=common_mode)
            # check for ROIs:
            if detname in ROIs:
                for iROI, ROI in enumerate(ROIs[detname]):
                    det.addFunc(ROIFunc(ROI=ROI, name='vonHamosROI_%d'%iROI))

            if detname in azIntParams:
                azint_params = azIntParams[detname]
                if 'center' in azIntParams[detname]:
                    try:
                        azav = azimuthalBinning(center=azint_params['center'],
                                                dis_to_sam=azint_params['dis_to_sam'], phiBins=11,
                                                Pplane=0, eBeam=azint_params['eBeam'], qbin=0.015)
                        det.addFunc(azav)
                    except:
                        print('could not define azimuthal integration for %s'%detname)
                        pass

            det.storeSum(sumAlgo='calib')
            dets.append(det)

    return dets
'epix10k135' is the name given to the detector in the DAQ (called an alias). You can get a list of detectors in your data by using the psana command "detnames exp=<expname>:run=<run#>" in a terminal, or programmatically as sketched below.
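
A minimal sketch of getting the same list from python, assuming psana's DetNames() helper (the experiment/run string is the same placeholder as above):

Code Block
import psana

# placeholder experiment/run string; fill in your own experiment
ds = psana.DataSource('exp=<expname>:run=<run#>')

# DetNames() lists the detectors found in the data, including their aliases
for names in psana.DetNames():
    print(names)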

Recommended common_mode settings for the different detector types (a sketch of passing these to DetObject follows the list):

  • CsPad, cs140k: if all tiles of the detector will exhibit a zero-photon peak, use common_mode=1; otherwise we recommend no correction (common_mode=0). It is also possible to request the use of the unbonded pixels by using common_mode=5.
  • Opal, Zyla: no common-mode correction. If a pedestal of the correct shape has been produced, it will be subtracted. common_mode=-1 will return the raw data.
  • Jungfrau: no common-mode correction is the default; we now suggest using common_mode=7.
  • Epix: we are using a method called "common_mode=46", which removes pixels with a lot of signal (10x the noise) and their neighbors. The common mode is then calculated for each row & column and subtracted. This is similar to method 7; other methods will hopefully be tested soon.
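
For illustration, selecting one of the values above when declaring a detector could look like the following sketch; the aliases in the dictionary are hypothetical, and checkDet/DetObject are used as in the producer example above:

Code Block
    # hypothetical aliases mapped to the common_mode values recommended above
    common_modes = {'cspad': 1,       # zero-photon peak on all tiles
                    'zyla': -1,       # raw data, no correction
                    'jungfrau1M': 7,  # suggested Jungfrau correction
                    'epix_1': 46}     # Epix row/column common mode
    for detname, cm in common_modes.items():
        if checkDet(ds.env(), detname):
            det = DetObject(detname, ds.env(), int(run), common_mode=cm)
            dets.append(det)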

The common mode corrections which are wrapped by DetObject are described on this page: Common mode correction algorithms.

When a DetObject has been declared, information used to extract the data will also be stored in the hdf5 file; among other things, we store the following (a sketch of reading these back follows the list):

  • pedestal (measured in a dark run)
  • noise (measured in a dark run)
  • gain (if applicable)
  • geometry arrays (x/y/z positions for each pixel)
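
A minimal sketch of inspecting these arrays with h5py; the file name is a placeholder, and the UserDataCfg/<detname> group layout and dataset names are assumptions — check the actual structure of your own file (e.g. with h5ls):

Code Block
import h5py

# placeholder file name; the group and dataset names below are assumptions
with h5py.File('<expname>_Run<run#>.h5', 'r') as f:
    cfg = f['UserDataCfg/epix10k135']
    ped = cfg['ped'][()]               # pedestal from the dark run
    rms = cfg['rms'][()]               # noise from the dark run
    x, y = cfg['x'][()], cfg['y'][()]  # pixel coordinates
    print(ped.shape, rms.shape)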

As mentioned above, to actually have event-based data for your detector in the hdf5 file, you should add reduction/feature extraction methods to your detector. Several of them have been set up to allow for easy addition without any need to write your own code. These, and the tools to help you set the different algorithms up, are described in the child pages listed at the bottom of this page. While it is possible to save the full image (using the ROI mechanism, sketched after this paragraph), this will result in big hdf5 files, possibly posing problems with memory management when these files are analyzed later. Full images are ideally only stored using the "cube/binned data" mechanism.
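
For completeness, a sketch of storing the full detector via the ROI mechanism; the bounds are hypothetical and must match the data shape of your detector:

Code Block
    # hypothetical full-detector ROI for a 704x768 camera;
    # adjust the bounds to your detector's data shape
    fullROI = [[0, 704], [0, 768]]
    det.addFunc(ROIFunc(ROI=fullROI, name='full'))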

Children Display

If you have needs that are not met by this, you can add your own code in the event loop and use the <det>.evt.dat array that has been created as input. You can then add your results as a python dict to smldata, which will save them to the hdf5 file. This also allows you to combine information from two detectors for your feature extraction. Unless you need to deal with big data (full images, ...) or have very computationally expensive algorithms, I would recommend storing simpler data in the first-level hdf5 file and running the second-level processing outside, as described in 4. SmallData Analysis to Cube Production. A sketch of such an addition follows.
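
A minimal sketch of user code inside the producer's event loop; the quantity computed is hypothetical, and smldata.event() is the same call the producer already uses to save per-event data:

Code Block
    # inside the event loop, after the standard processing has
    # filled det.evt.dat for this event
    userDict = {}
    if det.evt.dat is not None:
        # hypothetical user quantity: total signal on the detector,
        # computed with numpy (import numpy as np at the top of the file)
        userDict['userValues/total_signal'] = np.nansum(det.evt.dat)
    smldata.event(userDict)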