Notes from the April 23, 2009, Meeting (Robert Johnson)


  1. Build simple clusters that extend across tower boundaries (Tracy)
  2. Analysis of the clusters:
    1. centroid and direction (don't worry about tower gaps)
    2. upwards vs downwards
    3. type (MIP vs compact vs extended). This is intended for sorting the clusters, so that the most photon-like one can be used for finding the first track.
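The analysis steps above could be sketched as follows. This is a minimal illustration, not the actual recon code: the `Cluster` fields, the classification thresholds, and the sort order are all invented placeholders.

```python
from dataclasses import dataclass

# Hypothetical cluster record; field names are illustrative only.
@dataclass
class Cluster:
    hits: list          # (x, y, z) hit positions
    layer_span: int     # number of layers the cluster crosses
    width: float        # transverse extent

def centroid(cluster):
    """Plain average of hit positions; tower gaps ignored, per the notes."""
    n = len(cluster.hits)
    return tuple(sum(h[i] for h in cluster.hits) / n for i in range(3))

def classify(cluster):
    """Rough type assignment (MIP vs compact vs extended).
    Thresholds are placeholders, not tuned values."""
    if cluster.layer_span > 4 and cluster.width < 2.0:
        return "MIP"        # long and narrow: minimum-ionizing track
    if cluster.width < 5.0:
        return "compact"    # photon-like
    return "extended"

# Sort so the most photon-like (compact) cluster is tried first
# when seeding the first track.
ORDER = {"compact": 0, "MIP": 1, "extended": 2}

def sort_clusters(clusters):
    return sorted(clusters, key=lambda c: ORDER[classify(c)])
```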


  1. Find ghost tracks:
    1. Loop over the ordered clusters and find tracks à la the existing pattern recognition, including the blind (not cluster-seeded) searches.
    2. Or maybe don't use the cluster seeding but instead seed with ghost hits:
      1. Lone hits in layers with TOT=255
      2. Hits that form 3-in-a-row xy pairs in a tower with no trigger
      3. Early hits flagged by diagnostic info, for data in which that is enabled
    3. Extend search: 2-D tracks, tracks with lots of gaps, etc.
  2. Flag all hits on ghost tracks as being ghost hits
  3. Re-do pattern recognition (cluster seeded, then blind) avoiding hits flagged as ghosts
  4. On the good tracks of interest, perhaps go back and add hits that fit well on the track and are not ghost hits per 1b above, but were flagged in step 2 above as being associated with a ghost track.
  5. Build the data structures cross referencing all the tracks and hits, including the ghosts
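The ghost-handling flow in steps 1–3 might look like the sketch below. The hit fields, the `find_tracks` stand-in for the existing pattern recognition, and all other names are assumptions for illustration, not the real code.

```python
def is_ghost_seed(hit, tower_triggered, diagnostics_enabled=False):
    """Ghost-seed criteria from 1b above."""
    if hit["lone"] and hit["tot"] == 255:
        return True  # lone hit in a layer with TOT=255
    if hit["xy_three_in_row"] and not tower_triggered.get(hit["tower"], False):
        return True  # 3-in-a-row xy pair in a tower with no trigger
    if diagnostics_enabled and hit.get("early", False):
        return True  # early hit flagged by diagnostic info
    return False

def flag_and_refit(hits, tower_triggered, find_tracks):
    # Step 1: find ghost tracks, seeding with ghost-candidate hits.
    seeds = [h for h in hits if is_ghost_seed(h, tower_triggered)]
    ghost_tracks = find_tracks(hits, seeds)
    # Step 2: flag every hit that landed on a ghost track.
    on_ghost = {id(h) for trk in ghost_tracks for h in trk}
    for h in hits:
        h["ghost"] = id(h) in on_ghost
    # Step 3: redo pattern recognition, avoiding flagged hits.
    clean = [h for h in hits if not h["ghost"]]
    return find_tracks(clean, clean), ghost_tracks
```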

TKR/CAL and TKR/ACD algorithms: these were not discussed much, except to note that Eric Charles should be engaged in the latter; he already has new code for associating tracks with the ACD. There was also a short, inconclusive discussion of what to do with clusters, especially single-crystal clusters, when several clusters belong to the same track. How should we go about merging them? Probably this should be done after associating them with tracks.
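The "merge after track association" option could be as simple as grouping clusters by matched track and combining each group into one energy-weighted cluster. This is a sketch under assumed data structures; the record layout and function names are invented.

```python
from collections import defaultdict

def merge_by_track(assoc):
    """assoc: list of (track_id, energy, position) cluster records.
    Returns, per track, the summed energy and the energy-weighted centroid
    of all clusters associated with that track."""
    groups = defaultdict(list)
    for track_id, energy, pos in assoc:
        groups[track_id].append((energy, pos))
    merged = {}
    for track_id, members in groups.items():
        e_tot = sum(e for e, _ in members)
        centroid = tuple(
            sum(e * p[i] for e, p in members) / e_tot for i in range(3)
        )
        merged[track_id] = (e_tot, centroid)
    return merged
```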


The ntuple: inconclusive discussion. It is already too big (~600 columns) and includes a lot of unused rubbish. It serves two purposes: a scratch pad for Bill's event-level analysis, which needs to run off of a flat-table ntuple, and a collaboration data summary intermediate between recon and FT1/2. Can we put new variables needed only for this task into groups that can be eliminated later?
Should we spend the time to clean up the existing table?
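One way to keep task-specific variables removable is to register ntuple columns in named groups, so an entire group can be dropped before the summary is produced. The class, group, and column names below are made up for illustration.

```python
class Ntuple:
    """Toy ntuple whose columns live in named, droppable groups."""

    def __init__(self):
        self.groups = {}  # group name -> {column: value}

    def add(self, group, column, value):
        self.groups.setdefault(group, {})[column] = value

    def drop_group(self, group):
        # Remove a whole group of scratch variables in one step.
        self.groups.pop(group, None)

    def columns(self):
        # Flattened view of all surviving columns.
        return {c: v for g in self.groups.values() for c, v in g.items()}
```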


  1. Bill will draw up a data flow of the present tracking code
  2. Tracy can start working on the clusters in the CAL
  3. Leon will give Bill a list of MC variables that have to be faked in the data to make the classification tree work