It's time to think a bit about the Dataserver for DC2. What are the operating principles so far?

  • we will generate an ntuple with the background events interleaved, rewriting the event number, run number, time, and position of those background events to match the current orbit and event time.
  • trees will be generated during signal generation for signal events, but not propagated for the backgrounds. The tuple and tree files will therefore not be in sync!
  • the interleave process will create a lookup table for the background events, mapping new events to old (does it retain all the rewritten information, including position? see the sketch after this list)
  • the trees and tuples will have full MC information in them
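As a strawman for that lookup table, here is a minimal sketch in Python. Everything in it is hypothetical (the record fields and names are not an existing interface); the point is only that each background event keeps its original ids plus all of the rewritten quantities, position included.

    # Hypothetical sketch of the interleave lookup table: maps the rewritten
    # (run, event) pair that users will see back to the original background
    # event, and retains the rewritten time/position so nothing is lost.
    from dataclasses import dataclass

    @dataclass
    class InterleaveRecord:
        orig_run: int      # run id in the original background file
        orig_event: int    # event id in the original background file
        new_time: float    # event time assigned at interleave
        new_ra: float      # rewritten position/pointing info (degrees)
        new_dec: float

    # keyed by the rewritten (run, event) pair
    lookup: dict[tuple[int, int], InterleaveRecord] = {}

    def remember(new_run: int, new_event: int, rec: InterleaveRecord) -> None:
        lookup[(new_run, new_event)] = rec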

What do we want the users to see?

  • events should look like data and be seamless. Users prune/peel the data with the usual look and feel and get back data with no MC info, and trees that carry the new id/time/pointing info.

Possible implementation:

  • we would have to fiddle the tuples to null out the MC info, i.e. a special manual run on an already concatenated/chained dataset (a sketch follows this list)
  • so, should user pruning be done from the Catalogue, using the fiddled tuple files as input?
  • peeling would have to have an app in between to remap the requested run/event ids into the originals, and overwrite the originals (what about pointing info in the trees? see the second sketch below). MC files would not be an option to fetch.
  • for the astro server, we have to run the analysis cuts to get down to the photon list and load that into the astro server. It is not yet clear what we want to do for the "event list", which is supposed to be a subset of the entire downlink with the full tuple column definition: the so-called 10% solution, which gets rid of 90% of the background. We don't have these cuts defined yet.
  • do we have a special DC2 portal for this?
  • what do we do about serving pointing history? Livetime?
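On the tuple-fiddling point, here is one minimal sketch, assuming PyROOT and assuming the MC columns share a common branch-name prefix (the tree and branch names below are guesses, not the real DC2 ones). Rather than literally nulling the values, it drops the MC branches while copying the chained tuple:

    # Sketch: copy a concatenated/chained tuple while dropping MC branches.
    # "MeritTuple" and the "Mc*" prefix are assumptions for illustration.
    import ROOT

    infile = ROOT.TFile.Open("merit_chained.root")
    intree = infile.Get("MeritTuple")

    intree.SetBranchStatus("*", 1)    # enable everything...
    intree.SetBranchStatus("Mc*", 0)  # ...then switch off the MC branches

    outfile = ROOT.TFile("merit_fiddled.root", "RECREATE")
    outtree = intree.CloneTree(-1, "fast")  # copies only the active branches
    outtree.Write()
    outfile.Close()
    infile.Close()

Dropping the branches outright, instead of zeroing them, has the side benefit that users cannot even see that MC columns ever existed.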
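And for the in-between remapping app used during peeling, a sketch built on the lookup table above (fetch_event is a hypothetical stand-in for whatever the real fetch mechanism turns out to be):

    # Sketch: translate requested (run, event) ids back to the originals
    # before fetching, then overwrite the ids/time in what we hand back so
    # the result matches what the user asked for. fetch_event is hypothetical.
    def peel(requested):
        for new_run, new_event in requested:
            rec = lookup.get((new_run, new_event))
            if rec is None:
                # signal event: its ids were never rewritten, fetch as-is
                yield fetch_event(new_run, new_event)
            else:
                # background event: fetch by the original ids, then
                # overwrite them with the rewritten values the user expects
                event = fetch_event(rec.orig_run, rec.orig_event)
                event.run, event.event = new_run, new_event
                event.time = rec.new_time  # tree pointing info still an open question
                yield event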

In practice, the timescale is to be ready (and tested) for the DC2 kickoff meeting on March 1.

What else do we need to do, and how could we do it better?

Richard
