...
Using the above indexing feature, it is possible to use MPI to have offline-psana analyze events in parallel by having different cores access different events (this is useful for many, but not all, algorithms). Offline analysis can be parallelized this way on up to "thousands" of cores. MPI can also be used with online-psana from shared memory, up to the number of cores in a monitoring node (an example script is here that was able to process 120Hz with 7MB/event on 3 machines, 8 cores per machine). For technical reasons, indexing does not work for online-FFB analysis; online-psana from FFB can only be parallelized up to the number of DAQ "streams" (typically 6). This is some offline-psana sample code that sums a few images in a run in parallel using the indexing feature:
Code Block

    import psana
    import numpy as np
    import sys
    sys.path.insert(1,'/reg/common/package/mpi4py/mpi4py-1.3.1/install/lib/python')
    from mpi4py import MPI
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    ds = psana.DataSource('exp=XCS/xcstut13:run=15:idx')
    src = psana.Source('DetInfo(XcsBeamline.0:Princeton.0)')
    maxEventsPerNode = 2
    for run in ds.runs():
        times = run.times()
        mylength = len(times)/size
        if mylength > maxEventsPerNode:
            mylength = maxEventsPerNode
        # this line selects a subset of events, so each cpu-core ("rank")
        # works on a separate set of events
        mytimes = times[rank*mylength:(rank+1)*mylength]
        for i in range(mylength):
            evt = run.event(mytimes[i])
            if evt is None:
                print '*** event fetch failed'
                continue
            cam = evt.get(psana.Princeton.FrameV1, src)
            if cam is None:
                print '*** failed to get cam'
                continue
            if 'sum' in locals():
                sum += cam.data()
            else:
                sum = cam.data()
            id = evt.get(psana.EventId)
            print 'rank', rank, 'analyzed event with fiducials', id.fiducials()
            print 'image:\n', cam.data()

    # sum the images across mpi cores
    sumall = np.empty_like(sum)
    comm.Reduce(sum, sumall)
    if rank == 0:
        print 'sum is:\n', sumall
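The core of the script above is the arithmetic that assigns each MPI rank its own contiguous slice of the run's event timestamps. This can be sketched without psana or mpi4py installed; `partition_events` below is a hypothetical helper written for illustration (it is not part of psana or mpi4py), reproducing the same integer-division-and-slice logic as the sample code.

```python
def partition_events(times, rank, size, max_events_per_node=None):
    """Return the contiguous slice of `times` that a given rank processes.

    Mirrors the sample script: each of `size` ranks gets len(times)//size
    events, optionally capped at max_events_per_node.
    """
    mylength = len(times) // size  # events per rank (integer division)
    if max_events_per_node is not None and mylength > max_events_per_node:
        mylength = max_events_per_node
    # each rank takes a separate, contiguous block of event timestamps
    return times[rank * mylength:(rank + 1) * mylength]


# Example: 10 event timestamps split across 3 ranks, capped at 2 events each.
times = list(range(10))
for rank in range(3):
    print(rank, partition_events(times, rank, 3, max_events_per_node=2))
```

Note that with integer division some trailing events are dropped when `len(times)` is not a multiple of the number of ranks; production code may want to distribute the remainder. The script itself would typically be launched with something like `mpirun -n 8 python myscript.py` (launcher name and flags depend on the local MPI installation).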
...