To enable high throughput (e.g. 25x that of a single-threaded job), psana2 has a complicated behind-the-scenes system that sends chunks of events to many cores (e.g. 100) and reassembles the numpy analysis products.  I'm using smalldata to do this.  None of this complexity should be apparent to the casual analyst adding features.  The numpy data is stored via smd calls and (the way I have it set up) ends up in two .h5 files; the one with part0 in the name can be ignored.  With luck, all one needs to do in /sdf/data/lcls/ds/rix/rixx1003721/results/scripts is copy/paste something like

sbatch -p milano --nodes 10 --ntasks-per-node 10 --wrap="mpirun python -u -m mpi4py.run LinearityPlotsParallel.py -r 159"

(this launches LinearityPlotsParallel.py on 100 cores: 10 nodes x 10 tasks per node)
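For orientation, the write side of each script follows the standard psana2 smalldata pattern, roughly as in this minimal sketch (the detector name and the saved quantity are illustrative placeholders, not the scripts' actual code):

    from psana import DataSource

    # minimal sketch of the psana2 smalldata pattern the scripts rely on
    ds = DataSource(exp='rixx1003721', run=159)
    smd = ds.smalldata(filename='mysmall.h5', batch_size=5)
    for run in ds.runs():
        det = run.Detector('archon')  # hypothetical detector name
        for evt in run.events():
            frame = det.raw.raw(evt)
            if frame is None:
                continue
            # each rank accumulates per-event data; smalldata reassembles it
            smd.event(evt, frameMean=frame.mean())
    smd.done()  # finalizes the .h5 (this is also where the ignorable part0 file comes from)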

Then, when the job is done (watch the slurm-<jobid>.out file; job status can also be checked with squeue -u yourName), run

python LinearityPlotsParallel.py -r 159 -f ../scan/LinearityPlotsParallel_c0_r159_n100.h5

The resulting plots can be seen in ../scan, ../lowFlux (for single-photon clustering), or ../dark (for dark runs).

Feel free to improve the plots.  See analyze_h5 for the method that gets the data arrays out of the .h5 file.
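Reading the arrays back out is roughly this (a sketch in the spirit of analyze_h5; the dataset name frameMean is a guess, so list the keys to find the real ones):

    import h5py
    import numpy as np

    # sketch of pulling arrays back out of the smalldata .h5
    with h5py.File('../scan/LinearityPlotsParallel_c0_r159_n100.h5') as f:
        print(list(f.keys()))          # see what was actually saved
        ts = f['timestamp'][()]        # smalldata stores per-event timestamps
        frameMean = f['frameMean'][()] # hypothetical dataset name
    order = np.argsort(ts)             # events from different ranks may arrive unordered
    frameMean = frameMean[order]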

Currently supported scripts are

EventScanParallelSlice.py  LinearityPlotsParallelSlice.py  TimeScanParallelSlice.py  SimpleClustersParallelSlice.py
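All of these follow the same two-phase pattern shown above: an MPI pass (-r) that reduces events into an .h5, then a serial pass (-f) that analyzes it.  A sketch of the kind of command-line handling involved, assuming argparse (the scripts' actual option parsing may differ):

    import argparse

    # sketch of the shared CLI pattern, assuming argparse;
    # -t is a script-specific threshold (see TimeScan and SimpleClusters below)
    p = argparse.ArgumentParser()
    p.add_argument('-r', '--run', type=int, help='run number for the MPI reduction pass')
    p.add_argument('-f', '--file', help='existing smalldata .h5 to analyze serially')
    p.add_argument('-t', '--threshold', type=float, help='script-specific threshold')
    args = p.parse_args()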

EventScan makes the pixel-vs-event (or vs. timestamp; see "label" in the code) display plot of the pedestal (or whatever is being scanned).

LinearityPlots plots raw M/H-then-L gain data for single pixels and calibrated data for regions.

TimeScan plots the weighting function for regions and single pixels.  To hide the flux calculation, set fakeFlux to False in the code.  Otherwise, set a minimum (>0) flux via e.g.

sbatch -p milano --nodes 10 --ntasks-per-node 10 --wrap="mpirun python -u -m mpi4py.run TimeScanParallelSlice.py -r 179 -t .1"

SimpleClustersParallelSlice calls my 3x3 clustering routine and saves energy, row, col, nPixel, and isSquare (true for any 2x2 set of above-threshold pixels in the cluster except the diagonal pattern [[1,0], [0,1]]); a sketch of the isSquare test appears at the end of this subsection.  I had to hard-code the number of allowed pixels, but this is O(50) for two 48x48 regions at max 10% occupancy anyway.  One can make the h5 file and run the h5 analysis with, e.g.,

sbatch -p milano --nodes 10 --ntasks-per-node 10 --wrap="mpirun python -u -m mpi4py.run SimpleClustersParallelSlice.py -r 224 -t 100"

where the threshold (-t) is the maximum flux considered, and then

python SimpleClustersParallelSlice.py -r 224 -f ../lowFlux/SimpleClusters_c0_r224_n100.h5

which fits and plots all the slice pixels and plots the gain distribution (and also the pre-clustering energy spectrum).  The fitting currently takes a while (several minutes) on old low-stats data; it might be faster with better distributions.  Note that the analysis output lives in the results/lowFlux directory.
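As promised above, here is a minimal sketch of the isSquare test as I read its description; treating the mirrored diagonal as also excluded, and a lone pixel as square, are my assumptions:

    import numpy as np

    def is_square(mask):
        """Sketch of the isSquare test.  mask is a 3x3 boolean array
        marking the cluster's above-threshold pixels."""
        rows, cols = np.nonzero(mask)
        if rows.size == 0:
            return False
        if np.ptp(rows) > 1 or np.ptp(cols) > 1:
            return False  # hit pixels do not fit in a 2x2 window
        sub = np.zeros((2, 2), dtype=bool)
        sub[rows - rows.min(), cols - cols.min()] = True
        diag = np.array([[True, False], [False, True]])
        # the text names [[1,0],[0,1]]; excluding its mirror too is an assumption
        if np.array_equal(sub, diag) or np.array_equal(sub, diag[::-1]):
            return False
        return True

    # e.g. a 2x2 block of hits is square; a diagonal pair is not
    print(is_square(np.array([[1,1,0],[1,1,0],[0,0,0]], dtype=bool)))  # True
    print(is_square(np.array([[1,0,0],[0,1,0],[0,0,0]], dtype=bool)))  # False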

There is also MapCompEnOn.py, run via

python MapCompEnOn.py -f ../scan/TimeScanParallel_c0_r216_n1.h5

which computes the Onset, the Onset of CompEnOn, and the length of the transfer function for each pixel.  The first figure displays the map of these 3 parameters; the map is clickable, opening each pixel's transfer function in a separate window.  Close the figures to stop the program.  The picture of the maps is saved as a .png in /sdf/data/lcls/ds/rix/rixx1003721/results/scan/.
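The clickable map is the standard matplotlib button_press_event pattern; roughly (a sketch with stand-in data, not MapCompEnOn's actual code):

    import numpy as np
    import matplotlib.pyplot as plt

    # stand-in data: a per-pixel parameter map and per-pixel transfer functions
    onsetMap = np.random.rand(48, 48)
    transfer = np.cumsum(np.random.rand(48, 48, 100), axis=2)

    fig, ax = plt.subplots()
    ax.imshow(onsetMap)
    ax.set_title('Onset (click a pixel)')

    def on_click(event):
        if event.inaxes is not ax:
            return
        row, col = int(round(event.ydata)), int(round(event.xdata))
        f2, ax2 = plt.subplots()
        ax2.plot(transfer[row, col])
        ax2.set_title('pixel (%d, %d) transfer function' % (row, col))
        f2.show()

    fig.canvas.mpl_connect('button_press_event', on_click)
    plt.show()  # close the figures to end the program, as noted above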





