
...

There is a fair amount of parallelism and concurrency in this task, so the description below will not always be in order.  Here's a flowchart for the processing of one run:
There are two levels of parallel processing: most work is done on chunks, which are created by upstream software (the halfpipe), and reconstruction breaks each chunk up into crumbs.  Each recon crumb job must copy the entire digi chunk file to scratch before processing its portion of the file; a sketch of this two-level fan-out appears below.
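
The following is a minimal Python sketch of the two-level fan-out, under stated assumptions: the crumb count, file names, and the use of concurrent.futures are illustrative stand-ins, not the actual batch machinery.  What it does reflect from the text is that every crumb job copies the whole chunk file to its own scratch area before working on its slice.

    import os
    import shutil
    import tempfile
    from concurrent.futures import ProcessPoolExecutor

    CRUMBS_PER_CHUNK = 10  # assumed count; the real split is decided by recon

    def recon_crumb(chunk_path, crumb_index):
        """One recon crumb job: copy the WHOLE digi chunk file to scratch,
        then process only this crumb's portion of it."""
        with tempfile.TemporaryDirectory(prefix="scratch-") as scratch:
            local = shutil.copy(chunk_path, scratch)  # entire chunk copied per crumb
            # ... reconstruction of crumb number `crumb_index` would happen here ...

    if __name__ == "__main__":
        # Stand-ins for the ~25 digi chunk files the halfpipe produces per run.
        workdir = tempfile.mkdtemp(prefix="digi-")
        chunks = []
        for i in range(25):
            path = os.path.join(workdir, f"chunk{i:03d}.evt")
            with open(path, "wb") as f:
                f.write(b"\0" * 1024)  # tiny stand-in; real chunks are ~125 MB
            chunks.append(path)

        # Level 1: chunks in parallel; level 2: crumbs within each chunk in parallel.
        with ProcessPoolExecutor() as pool:
            for chunk in chunks:
                for crumb in range(CRUMBS_PER_CHUNK):
                    pool.submit(recon_crumb, chunk, crumb)

Note the cost baked into this design: because each crumb job copies the full chunk, AFS reads are multiplied by the number of crumbs per chunk, which is consistent with the large "out (AFS)" figure in the data-movement totals at the bottom.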

...

.evt chunk files made by the halfpipe (~25 per run @ 125 MB each) are copied from AFS to scratch.  Each file is copied twice, once for digitization and once for fastmon, and all copies are read in parallel.
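
For scale, the AFS read volume implied by this step alone can be checked with a quick calculation (the 25-chunk and 125 MB figures come from the line above; the full totals are in the ledger at the bottom):

    chunks_per_run = 25   # ~25 .evt chunk files per run
    chunk_mb = 125        # ~125 MB per chunk
    copies = 2            # each chunk copied twice: digitization + fastmon

    total_gb = chunks_per_run * chunk_mb * copies / 1024
    print(f"~{total_gb:.1f} GB read from AFS for .evt chunks")  # ~6.1 GB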

...

Every file that is written to NFS is read out more-or-less immediately by the data catalog crawler.

31 GB out (NFS)

------------ total data moved ------------
("in" = written to the filesystem, "out" = read from it)

27 GB in (AFS)
165 GB out (AFS)
31 GB in (NFS)
37 GB out (NFS)
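
A quick tally of the ledger, using only the numbers straight from the lines above:

    ledger_gb = {
        ("AFS", "in"): 27,    # written to AFS
        ("AFS", "out"): 165,  # read from AFS
        ("NFS", "in"): 31,    # written to NFS
        ("NFS", "out"): 37,   # read from NFS
    }
    print(f"total data moved: {sum(ledger_gb.values())} GB")  # 260 GB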

...