Original Declaration (by Richard Dubois)

Here are my notes on discussions I had with Berrie, Benoit and David (Chamont) this week at LLR (there are at least three Davids working on GLAST here in France) on the topic of how CalRecon needs to evolve to meet our future needs. These needs are notably set by our desire for iterative reconstruction and for multiple energy correction algorithms.

First off, CalRecon performs two standalone (at least for now) operations:

  • Convert raw data from ADC counts etc. to engineering units - a revised version should use the proposed CalResponseSvc.
  • Do clustering - at the moment just one cluster, but more in future (note that Tracy has checked in the updated Fuzzy cluster tool and his simple cluster tool for further examination). I don't think any dependence of clustering on outside information from the TKR is yet envisaged.

These need only be done once per event.
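As a way of making the once-per-event steps independent of the iterated ones, the clustering stage could sit behind an abstract interface so that the Fuzzy tool, the simple tool, or any future scheme can be swapped without touching the calibration step. A minimal C++ sketch (all class and member names here are hypothetical, not actual GlastRelease code):

```cpp
#include <cassert>
#include <vector>

// Hypothetical engineering-units hit: crystal id plus deposited energy (MeV).
struct CalXtalHit { int xtalId; double energyMeV; };

// Hypothetical cluster: the member hits and their summed energy.
struct CalCluster {
    std::vector<CalXtalHit> hits;
    double totalEnergy() const {
        double e = 0.0;
        for (const auto& h : hits) e += h.energyMeV;
        return e;
    }
};

// Abstract clustering interface, so Fuzzy, Simple, etc. can be swapped
// without touching the once-per-event conversion step.
struct ICalClusterTool {
    virtual ~ICalClusterTool() = default;
    virtual std::vector<CalCluster> findClusters(
        const std::vector<CalXtalHit>& hits) const = 0;
};

// Trivial tool mirroring the current one-cluster behaviour: every hit
// goes into a single cluster.
struct SingleClusterTool : ICalClusterTool {
    std::vector<CalCluster> findClusters(
        const std::vector<CalXtalHit>& hits) const override {
        return { CalCluster{ hits } };
    }
};
```

A job would then pick the concrete tool by name, leaving the conversion step untouched.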

Here are steps which could be iterated and/or use TKR information as input:

  • Various energy correction schemes (we have three so far: last layer (LL), profile fitting, and CalValsTool)
  • We might want some overall CAL event summary
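Since each correction scheme currently gets only a single number in the cluster class, a common interface that returns a richer per-scheme result would let all three schemes run side by side and keep their answers. A C++ sketch under that assumption (names hypothetical):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical result of one correction scheme: the corrected energy plus
// the bookkeeping a single number cannot hold.
struct CalCorResult {
    std::string toolName;   // e.g. "LastLayer", "Profile", "CalValsTool"
    double energyMeV;       // corrected energy
    double quality;         // fit chi2, probability, ... tool-defined
    bool   applicable;      // e.g. LL not applicable to muons or heavy ions
};

// Common interface each correction scheme would implement.
struct ICalEnergyCorr {
    virtual ~ICalEnergyCorr() = default;
    virtual CalCorResult correct(double rawEnergyMeV) const = 0;
};

// Toy "last layer" correction: scale by a fixed leakage factor.
// (The factor is illustrative, not a real calibration.)
struct LastLayerCorr : ICalEnergyCorr {
    CalCorResult correct(double rawEnergyMeV) const override {
        return { "LastLayer", rawEnergyMeV * 1.1, /*quality=*/1.0, true };
    }
};

// Running every registered scheme keeps every answer, rather than
// overwriting a single energy slot in the cluster.
std::vector<CalCorResult> runAll(
    const std::vector<const ICalEnergyCorr*>& tools, double rawEnergyMeV) {
    std::vector<CalCorResult> out;
    for (const auto* t : tools) out.push_back(t->correct(rawEnergyMeV));
    return out;
}
```

The event summary step could then pick among the results, or record them all.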

This is not so different from what we have now, in which we do two iterations with the TKR: in the current recon, TKR wants an energy estimate and barycenter from the CAL, while CAL wants an angle (and maybe a vertex position) from the TKR. So we run CAL without TKR first; TKR uses the cluster; CAL is called again with the available tracks; and TKR is called again with (presumably) better energy information. So what's the problem?

  • The output from the first iteration is completely overwritten by the second. There is no record of how we arrived at the second result.
  • Each energy correction scheme is allocated one number (the energy) in the cluster class. That's it. Nothing for quality, applicability, or algorithm-specific information about the result.
  • While not necessary, it might be nice to allow more than one cluster algorithm to run and have data structures that allow parallel trees.
  • We should be making use of relational tables to give us more bookkeeping on how high level constructs are composed. For example, it would be nice to be able to find out which MC hits ended up in which clusters. And so on. These are powerful tools for understanding the performance of algorithms. See what Tracy has added in the TKR for examples.
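The MC-hit-to-cluster bookkeeping mentioned above can be pictured as a many-to-many relation table. A minimal C++ sketch using a standard multimap (the TKR relational tables are more general; this only illustrates the lookup, and the names are hypothetical):

```cpp
#include <cassert>
#include <map>
#include <vector>

// Hypothetical relation table: MC hit id -> index of a cluster that the
// hit contributed to. A multimap naturally allows many-to-many relations.
using RelTable = std::multimap<int /*mcHitId*/, int /*clusterIdx*/>;

// Which clusters did a given MC hit end up in?
std::vector<int> clustersOf(const RelTable& rel, int mcHitId) {
    std::vector<int> out;
    auto range = rel.equal_range(mcHitId);
    for (auto it = range.first; it != range.second; ++it)
        out.push_back(it->second);
    return out;
}
```

Walking the table in the other direction (cluster to MC hits) is the symmetric query, which is exactly the kind of performance study these tables enable.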

I believe that GCRCalib does not actually belong in Recon explicitly. There should be a Gaudi algorithm (which needs access to the propagator) to select events and create a specialty output file that can then be processed separately (as we do in calibGenCal for other calibrations). Since the output of GCRCalib is the set of gain factors for the lower-gain readouts, Recon just makes use of them.

We may well want CalRecon to accept "suggestions" on what the event might be. For example, on the ground we may be pretty sure we have a muon source. But we may also want to allow CalRecon to perform parallel event interpretations. One would not use LL corrections for muons or heavy ions, for example. So perhaps CalRecon could always do three (so far) interpretations - as gamma, muon or heavy ion - and have the data structures to accommodate them. Then we could allow CalRecon to be directed to a particular interpretation or not (further allowing the idea of an event-shape preprocessor, or of an event summary step which could call Cal/TkrRecon one last time with the final interpretation of the event).
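Holding the three interpretations side by side, keyed by hypothesis, is one way to let a "suggestion" (or a later summary step) pick one without rerunning CalRecon. A C++ sketch, with toy fields and hypothetical names:

```cpp
#include <cassert>
#include <map>

// The three (so far) event hypotheses.
enum class EventHypothesis { Gamma, Muon, HeavyIon };

// Per-hypothesis reconstruction summary (toy fields for illustration).
struct CalInterpretation {
    double energyMeV;
    double quality;
};

// All interpretations live side by side; nothing is overwritten.
using InterpretationMap = std::map<EventHypothesis, CalInterpretation>;

// Retrieve the interpretation for a suggested hypothesis, defaulting to
// gamma when no outside suggestion is given.
const CalInterpretation& pick(
    const InterpretationMap& m,
    EventHypothesis suggested = EventHypothesis::Gamma) {
    return m.at(suggested);
}
```

An event-shape preprocessor would simply supply the `suggested` argument; with no suggestion, downstream code gets the gamma interpretation.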

Richard Dubois.
16 June 2004.

Use Cases

Gamma Event

Background rejection

Ground π0

General Requirements

Job options

We want the job options file to be as simple and short as possible, especially when using default values.
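In practice that means the package ships its own defaults and a user job only overrides what differs. A hedged sketch in the Gaudi plain-text job options style (the algorithm and tool property names below are hypothetical, not actual CalRecon properties):

```
// Pull in the package defaults; override only what differs.
#include "$CALRECONROOT/src/jobOptions.txt"

// Hypothetical property names for illustration:
CalClustersAlg.clusterToolName = "FuzzyClusterTool";
CalClustersAlg.corrToolNames   = { "LastLayerCorrTool", "ProfileCorrTool" };
```

A job that is happy with the defaults would contain only the `#include` line.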

Flexibility

We want the user to be able to implement his or her own clustering or correction algorithms and plug them into a Gleam job.
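The usual way to get this in Gaudi is a name-to-factory registry, so the job options can select a user's tool by its string name. A standalone C++ sketch of that idea (a stand-in for Gaudi's tool factory machinery, not the real thing; all names hypothetical):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <memory>
#include <string>

// Minimal stand-in for a clustering-tool interface.
struct IClusterTool {
    virtual ~IClusterTool() = default;
    virtual std::string name() const = 0;
};

// A user-supplied tool, compiled in the user's own package.
struct MyClusterTool : IClusterTool {
    std::string name() const override { return "MyClusterTool"; }
};

// Name-to-factory registry: the job options choose the tool by string
// name, and the framework constructs it through the registered factory.
using ToolFactory = std::function<std::unique_ptr<IClusterTool>()>;

std::map<std::string, ToolFactory>& toolRegistry() {
    static std::map<std::string, ToolFactory> reg;
    return reg;
}
```

Registering a tool is one line in the user's package; selecting it is one line in the job options, which keeps Gleam itself unchanged.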

Recon coherence

We want all the recon packages to follow a common policy.
