In order to keep the file sizes small and avoid issues when analyzing the smallData files, we try to extract the important information gleaned from the areaDetectors on an event-by-event basis and only save these pieces of data.
Each area detector defined in smalldata_tools has the following psana masks defined:
self.statusMask = self.det.mask(self.run, status=True)
: only accounts for the pixel status
self.mask = self.det.mask(self.run, unbond=True, unbondnbrs=True, status=True, edges=True, central=True)
: also accounts for geometry-related pixels (unbonded pixels and their neighbors, edges, central rows/columns)
self.cmask = self.det.mask(self.run, unbond=True, unbondnbrs=True, status=True, edges=True, central=True, calib=True)
: additionally applies the calibration mask; generally the mask you want
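For reference, such a mask can also be applied by hand with the plain psana Detector interface; a minimal sketch (not smalldata_tools code; det, run and evt stand for a psana Detector, run number and event):

cmask = det.mask(run, unbond=True, unbondnbrs=True, status=True, edges=True, central=True, calib=True)
img = det.calib(evt)       # pedestal/common-mode corrected data
img_masked = img * cmask   # the mask is 1 for good pixels, so bad pixels are zeroed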
Typical forms of userData
SmallDataProducer_userData.py has examples for the most standard requests. Parameters that are important for the production (ROI boundaries, centers for azimuthal projections,...) are stored in the "UserDataCfg" subdirectory.
def define_dets(run):   # the snippet is the body of the producer's detector-definition function
    azIntParams = getAzIntParams(run)
    ROIs = getROIs(run)
    dets = []   # list of configured DetObjects (added so the snippet is self-contained)
    detnames = ['epix10k135', 'epix10k2M']
    for detname in detnames:
        havedet = checkDet(ds.env(), detname)
        if havedet:
            common_mode = 84
            if detname == 'epix10k135':
                common_mode = 80
            det = DetObject(detname, ds.env(), int(run), common_mode=common_mode)
            # check for ROIs:
            if detname in ROIs:
                for iROI, ROI in enumerate(ROIs[detname]):
                    det.addFunc(ROIFunc(ROI=ROI, name='ROI_%d' % iROI))
            # check for azimuthal integration parameters:
            if detname in azIntParams:
                azint_params = azIntParams[detname]
                if 'center' in azint_params:
                    try:
                        azav = azimuthalBinning(center=azint_params['center'],
                                                dis_to_sam=azint_params['dis_to_sam'],
                                                phiBins=11, Pplane=0,
                                                eBeam=azint_params['eBeam'], qbin=0.015)
                        det.addFunc(azav)
                    except Exception:
                        print('could not define azimuthal integration for %s' % detname)
            det.storeSum(sumAlgo='calib')   # also store the sum of the calibrated data over the run
            dets.append(det)
    return dets
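The snippet assumes helper functions getROIs and getAzIntParams that return the run-dependent parameters as dictionaries keyed by detector name. A hypothetical sketch of their shape (all values here are made up and depend on the experiment):

def getROIs(run):
    # hypothetical: one list of ROI bounds per detector; the bounds format
    # follows the shape of the detector data (here tile, row, column)
    return {'epix10k2M': [[[0, 1], [0, 352], [0, 384]]]}

def getAzIntParams(run):
    # hypothetical: the parameters consumed by azimuthalBinning above
    return {'epix10k2M': {'center': [0., 0.],     # beam center (made up)
                          'dis_to_sam': 80.,      # sample-detector distance (made up)
                          'eBeam': 9.5}}          # beam energy (made up)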
The common mode corrections that are wrapped by DetObject are described on this page: Common mode correction algorithms.
Once a DetObject has been declared, the information used to extract the data will also be stored in the hdf5 file; among other things, we store the masks and the parameters of each added function (see the "UserDataCfg" subdirectory mentioned above).
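This means the production configuration can be checked later directly from the file, e.g. with h5py (a minimal sketch; the file name is illustrative):

import h5py
with h5py.File('smalldata_run42.h5', 'r') as f:   # hypothetical file name
    f['UserDataCfg'].visit(print)   # list all stored configuration groups/datasets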
As mentioned above, to actually have event-based data for your detector in the hdf5 file, you should add reduction/feature-extraction methods to your detector. Several of them have been set up to allow easy addition without any need to write your own code; these, and the tools that help you configure the different algorithms, are described in the child pages listed at the bottom of this page. While it is possible to save the full image (using the ROI mechanism), this will result in big hdf5 files, possibly posing memory-management problems when these files are analyzed later. Full images are ideally only stored using the "cube/binned data" mechanism.
If you have needs that are not met by this, you can add your own code in the event loop and use the <det>.evt.dat array that has been created as input. You can then add your results as a python dict to smldata, which will save them to the hdf5 file. This also allows you to combine information from two detectors in your feature extraction. Unless you need to deal with big data (full images, ...) or have very computationally expensive algorithms, I would recommend storing simpler data in the first-level hdf5 file and running the second-level processing outside of it, as described in 4. SmallData Analysis to Cube Production.
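Such user code in the event loop could look roughly like this (a minimal sketch; the key names, the computed quantity and the second detector are illustrative):

# inside the event loop, after the detectors have processed the event:
userDict = {}
userDict['%s_sum' % detname] = det.evt.dat.sum()   # a simple per-event scalar
# information from two detectors can be combined here, e.g.:
# userDict['ratio'] = det.evt.dat.sum() / other_det.evt.dat.sum()
smldata.event(userDict)   # written to the hdf5 file alongside the standard data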