In the previous parallelization examples, all the cores have been analyzing data identically.  Another useful but somewhat more complex pattern is to have MPI clients intermittently send updates to one master process that manages data visualization and storage.  This pattern is useful for updating plots during a run (instead of only at the end of a run).  It is typically needed only when running in real-time from shared memory, where it can be difficult to ensure that all nodes call Gather or Reduce the same number of times; for real-time disk-based "smd" analysis, Gather/Reduce can be used.

The idea here is to have two different pieces of code (client and master) that exchange data via MPI send/recv calls, used in such a way that clients can send to the master whenever they wish.  To see this example, copy the following four files from /reg/g/psdm/tutorials/examplePython/: client.py, master.py, mpidata.py, and mpi_driver.py (which parses arguments and decides whether to run the master or a client, as appropriate).
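The structure of those four files can be sketched in a single script as below.  This is a minimal illustration assuming mpi4py, not the tutorial's actual code: the tag values, the `make_update` helper, and the simple dict message are hypothetical stand-ins for what mpidata.py provides, and the event loop is a placeholder for real analysis.

```python
# Sketch of the client/master pattern, assuming mpi4py.  Clients push
# updates to rank 0 whenever they wish; the master receives from any
# source until every client has sent a "done" message.

DONE_TAG = 0   # client signals it has finished (hypothetical tag value)
DATA_TAG = 1   # client is sending an analysis update (hypothetical tag value)

def make_update(rank, nevents_seen, mean_intensity):
    """Package one client update (plays the role of mpidata.py)."""
    return {"rank": rank, "nevents": nevents_seen, "mean": mean_intensity}

def run_client(comm):
    """Analyze events, intermittently pushing updates to the master (rank 0)."""
    rank = comm.Get_rank()
    for i in range(1, 11):                        # stand-in for the event loop
        if i % 5 == 0:                            # send an update every 5 events
            comm.send(make_update(rank, i, 0.0), dest=0, tag=DATA_TAG)
    comm.send(None, dest=0, tag=DONE_TAG)         # tell the master we're finished

def run_master(comm):
    """Receive updates from any client until all clients report done."""
    from mpi4py import MPI
    nclients = comm.Get_size() - 1
    ndone = 0
    while ndone < nclients:
        status = MPI.Status()
        msg = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == DONE_TAG:
            ndone += 1
        else:
            # here the real master would update plots or write to disk
            print("update from rank", status.Get_source(), msg)

def main():
    """Plays the role of mpi_driver.py: rank 0 is the master, others clients."""
    from mpi4py import MPI
    comm = MPI.COMM_WORLD
    if comm.Get_rank() == 0:
        run_master(comm)
    else:
        run_client(comm)
```

Such a script would be launched with something like `mpirun -n 4 python mpi_driver.py`, which starts one master and three clients.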

...