...

-d <dirname>: the smallData file is read from <dirname>, and the cube files are also written there. At this point, the cubes are always written to the directory the smallData is read from.
These files are typically named "CubeSetup_<somethingDescriptive>" and are passed to a job submission script called "cubeRun". cubeRun has a help function that explains the command line parameters; they are very similar to littleDataRun's, aside from the required "-c <CubeSetupFilename>". The following are some of the options that differ from littleDataRun. -m takes the common mode parameter: 5 uses the unbonded pixels and 1 uses the zero peak. "1" works better, but fails if the ASICs carry a lot of signal; the unbonded pixels always work. If you want to threshold the pixels in high gain mode, 2.5 rms or 25 ADU are typical working values to start with.

-s <size>: rebin image to size x size pixels

-m <common mode parameter>: apply common mode

-t <threshold in ADU>: hit finder, threshold in ADU

-T <threshold in rms>: hit finder, threshold in units of rms

-R: store raw CsPad data (NOT the assembled image)

-j <number of MPI jobs>
If you specify a number of jobs that significantly exceeds the number of bins, a fake variable is added for binning so that the bin-based calculation is spread over more cores/nodes. This intermediate variable is removed again at the very end.

The hdf5 file also stores the pedestal and rms values. If the data is stored in "raw" format, the big CsPad has the shape 32x185x388 instead of 1692x1691. The same is true for the pedestal and the rms. We also store the x/y values for each pixel.
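Since both the raw-shaped detector data and the per-pixel x/y values are stored, a rough image can be assembled outside of the tool. The sketch below is only an illustration: the function name is hypothetical, it assumes the x/y values are in micrometers with the nominal CsPad pixel pitch of ~110 um, and the nearest-pixel placement is not the tool's own geometry correction.

Code Block
import numpy as np

# Minimal sketch: assemble a rough 2D image from raw-shaped CsPad data
# (32 x 185 x 388) using the stored per-pixel x/y coordinates.
# Assumption (not from this page): x/y are in micrometers; adjust
# pixel_size or drop the division if they are already in pixel units.
def assemble_rough_image(raw, x, y, pixel_size=110.0):
    ix = np.rint((x - x.min()) / pixel_size).astype(int)
    iy = np.rint((y - y.min()) / pixel_size).astype(int)
    img = np.zeros((ix.max() + 1, iy.max() + 1))
    img[ix, iy] = raw   # nearest-pixel placement, no interpolation
    return img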

Cube data format

By default, the data is saved in an hdf5 file in /reg/d/psdm/<instrument>/<expname>/hdf5/smalldata.
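A quick way to inspect a cube file is to open it with h5py and list its contents. This is just a sketch: the file name is a placeholder, and the exact dataset names depend on the variables and detectors configured in the CubeSetup file.

Code Block
import h5py

# Placeholder path; fill in instrument, experiment and cube file name.
fname = '/reg/d/psdm/<instrument>/<expname>/hdf5/smalldata/<cube_file>.h5'

with h5py.File(fname, 'r') as f:
    # Print every group and dataset stored in the cube, e.g. the binned
    # detector data, the pedestal/rms and the per-pixel x/y values.
    f.visititems(lambda name, obj: print(name, getattr(obj, 'shape', '')))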

...

Multi-dimensional binning

You can add more dimensions to bin the data in by using <cube>.add_BinVar:

Code Block
def add_BinVar(self, addBinVars):
    """
    add extra dimensions to bin the data in
    parameters: addBinVars: dict or list
                list: only 1 extra variable [varname, bin1, bin2, ....]
                dict: {varname: bins}
    """

Indices for the events in all dimensions are created and turned into one big flat index, after which xarray binning is used. Finally, the data is reshaped into the expected form.
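The flat-index idea can be illustrated with plain numpy. This is a conceptual sketch of the approach, not the tool's actual xarray-based implementation; the variable names and bin edges are made up.

Code Block
import numpy as np

# Conceptual sketch of multi-dimensional binning via one flat index.
# Hypothetical per-event quantities: a scan variable and one extra bin variable.
scan_val  = np.random.uniform(0, 10, 10000)
extra_val = np.random.uniform(0, 1, 10000)
signal    = np.random.normal(size=10000)

scan_edges  = np.linspace(0, 10, 11)
extra_edges = np.linspace(0, 1, 5)

# Per-event bin index along each dimension...
i_scan  = np.digitize(scan_val, scan_edges) - 1
i_extra = np.digitize(extra_val, extra_edges) - 1

# ...combined into a single flat index.
shape = (len(scan_edges) - 1, len(extra_edges) - 1)
flat = np.ravel_multi_index((i_scan, i_extra), shape, mode='clip')

# Sum the signal per flat bin, then reshape back to the expected form.
binned = np.bincount(flat, weights=signal, minlength=shape[0] * shape[1])
binned = binned.reshape(shape)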

Returning event-by-event data (from smallData)

...