...
```
ls /reg/d/psdm/XCS/xcs84213/xtc/index/
```
There should be one index file per data file in your experiment. If they do not exist, they can be created with the "xtcindex" command (send email to pcds-help@slac.stanford.edu to have this done).
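The one-index-file-per-data-file check can be scripted. A minimal sketch, assuming index files sit in an `index/` subdirectory and are named `<datafile>.idx` as in the listing above (the helper `missing_index_files` is illustrative, not part of psana):

```python
import os

def missing_index_files(xtc_dir):
    """Return the .xtc files in xtc_dir that have no matching index file.

    Assumes index files live in xtc_dir/index and are named <datafile>.idx.
    """
    xtcs = [f for f in os.listdir(xtc_dir) if f.endswith('.xtc')]
    idx_dir = os.path.join(xtc_dir, 'index')
    idxs = set(os.listdir(idx_dir)) if os.path.isdir(idx_dir) else set()
    return sorted(f for f in xtcs if f + '.idx' not in idxs)
```

An empty return list means every data file is indexed; anything else is a candidate for the xtcindex request above.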
...
Using the above indexing feature, it is possible to use MPI to have psana analyze events in parallel (this is useful for many, but not all, algorithms) by having different cores access different events. This works for offline analysis (in principle on "thousands" of cores; it requires the python package mpi4py, which is part of the psana release, and a compatible version of openmpi, which is not currently in the psana release) or for online analysis from shared memory (up to the number of cores in a monitoring node). For technical reasons, it does not work for online-FFB analysis. This is some sample code that sums a few images in a run in parallel:
```python
import psana
import numpy as np
import sys
sys.path.insert(1,'/reg/common/package/mpi4py/mpi4py-1.3.1/install/lib/python')
from mpi4py import MPI
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

ds = psana.DataSource('exp=cxib7913:run=34:idx')
src = psana.Source('DetInfo(CxiDg2.0:Tm6740.0)')
maxEventsPerNode = 2
for run in ds.runs():
    times = run.times()
    # give each rank its own contiguous slice of the event-time list
    mylength = len(times)/size
    if mylength > maxEventsPerNode: mylength = maxEventsPerNode
    mytimes = times[rank*mylength:(rank+1)*mylength]
    for i in range(mylength):
        evt = run.event(mytimes[i])
        if evt is None:
            print '*** event fetch failed'
            continue
        pulnixcam = evt.get(psana.Camera.FrameV1, src)
        if pulnixcam is None:
            print '*** failed to get pulnixcam'
            continue
        if 'sum' in locals():
            sum += pulnixcam.data16()
        else:
            sum = pulnixcam.data16()
        id = evt.get(psana.EventId)
        print 'rank', rank, 'analyzed event with fiducials', id.fiducials()
        print 'image:\n', pulnixcam.data16()

sumall = np.empty_like(sum)
# sum the images across mpi cores
comm.Reduce(sum, sumall)
if rank == 0:
    print 'sum is:\n', sumall
```
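Note that the slicing above hands each rank one contiguous block of at most maxEventsPerNode events and silently drops any remainder when len(times) does not divide evenly by size. To cover every event in the run instead, a round-robin split can be used; a sketch (`my_event_indices` is an illustrative helper, not a psana function):

```python
def my_event_indices(nevents, rank, size):
    """Event indices for this rank under a round-robin split.

    Rank r takes events r, r+size, r+2*size, ... so every one of the
    nevents is processed exactly once and rank loads differ by at most one.
    """
    return list(range(rank, nevents, size))

# in the loop above, the slice would become:
#   mytimes = [times[i] for i in my_event_indices(len(times), rank, size)]
```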
This can be run interactively, parallelized over 2 cores, with these commands on a psana node:
```
setenv PATH /reg/common/package/openmpi/openmpi-1.8/install/bin/:${PATH}
mpirun -n 2 python mpi.py
```
...