...
```python
from psana import DataSource

ds = DataSource(exp='rixdaq18', run=17)
myrun = next(ds.runs())
motor1 = myrun.Detector('motor1')
motor2 = myrun.Detector('motor2')
step_value = myrun.Detector('step_value')
step_docstring = myrun.Detector('step_docstring')
for step in myrun.steps():
    print(motor1(step), motor2(step), step_value(step), step_docstring(step))
    for evt in step.events():
        pass
```
Running on Shared Memory
psana2 scripts can also be run on shared memory. Look at the DAQ .cnf file to see which node is running the shared memory server. You can find the shared memory name (the hutch name is typically used) either by looking in the .cnf file (the "-P" option to the monReqServer executable) or by running a command like this:
```
drp-neh-cmp003:~$ ls /dev/shm/
PdsMonitorSharedMemory_tmo
drp-neh-cmp003:~$
```
For this output, you would use "DataSource(shmem='tmo')".
When running with MPI there are some complexities in propagating the environment to remote nodes; the way to address that is described in this link. The same parallelization model is used as for the production of the small hdf5 files described here. The typical pattern is to use the small-data callback to receive, in a dictionary, the data gathered from all nodes, as shown in the example here.
```python
smd = ds.smalldata(batch_size=5, callbacks=[my_smalldata])
```
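As a sketch of that callback pattern (the field name `intensity` and the batch contents below are made up; in a real job psana invokes the callback on the gathering node with each batch dictionary of small data):

```python
# Hypothetical small-data callback: accumulate a per-event scalar so it can
# later be published to a realtime plot.
accumulated = []

def my_smalldata(data_dict):
    # data_dict stands in for one gathered batch of small data;
    # 'intensity' is an illustrative field name, not a psana built-in.
    accumulated.extend(data_dict.get('intensity', []))

# Simulated batches standing in for what psana would deliver:
for batch in ({'intensity': [1.0, 2.0]}, {'intensity': [3.0]}):
    my_smalldata(batch)
print(accumulated)
```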
It is also necessary to reserve one core to do the gathering, so include a line like this:
```python
os.environ['PS_SRV_NODES'] = '1'
```
Typically psmon is used in the callback to publish results to realtime plots: Visualization Tools.
MPI Task Structure
To allow for scaling, many hdf5 files are written, one per "SRV" node. The total number of SRV nodes is set by the environment variable PS_SRV_NODES (default 0). These hdf5 files are then joined by psana into what appears to be a single file using the hdf5 "virtual dataset" feature.
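The virtual-dataset idea can be sketched with h5py directly (this is an illustration of the HDF5 feature, not psana's internal code; the file names, dataset name, and sizes are made up):

```python
import os
import tempfile

import h5py
import numpy as np

tmpdir = tempfile.mkdtemp()

# Two "part" files standing in for the per-SRV-node hdf5 files, each
# holding a slice of the data.
for i in range(2):
    with h5py.File(os.path.join(tmpdir, f'part{i}.h5'), 'w') as f:
        f.create_dataset('data', data=np.arange(5, dtype='i8') + 5 * i)

# A virtual layout maps regions of the combined dataset onto the part files.
layout = h5py.VirtualLayout(shape=(10,), dtype='i8')
for i in range(2):
    layout[i * 5:(i + 1) * 5] = h5py.VirtualSource(
        os.path.join(tmpdir, f'part{i}.h5'), 'data', shape=(5,))

with h5py.File(os.path.join(tmpdir, 'joined.h5'), 'w', libver='latest') as f:
    f.create_virtual_dataset('data', layout)

# A reader of the joined file sees one contiguous dataset.
with h5py.File(os.path.join(tmpdir, 'joined.h5'), 'r') as f:
    combined = f['data'][:]
print(combined.tolist())
```

Reads from the joined file are transparently redirected to the underlying part files, so no data is copied when the files are combined.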
...