Blog from May, 2010

Reason for change

The new version (AlarmsCfg-05-26-01, as opposed to AlarmsCfg-05-26-00) features a new limit for the quantities CalXAdcPedRMS_LEX8 and CalXAdcPedRMS_LEX1 for cal channel 1685.

Test Procedure

We have processed monitoring products from real on-orbit data (LPA) locally with this version of AlarmsCfg.

Rollback procedure

The package can be rolled back to the previous version by flipping a soft link. Also note that the package is completely independent from any other package running in the pipeline and will not cause a version change of L1Proc.
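The soft-link flip can be sketched as follows; the directory layout and the `current` link name are assumptions for illustration, not the actual pipeline paths:

```python
import os

def rollback(install_dir, previous_version):
    """Point the 'current' soft link back at the previous package version.

    install_dir: directory holding versioned package trees, e.g.
                 AlarmsCfg-05-26-00/ and AlarmsCfg-05-26-01/ (hypothetical layout)
    previous_version: name of the version directory to roll back to
    """
    link = os.path.join(install_dir, "current")
    tmp = link + ".tmp"
    os.symlink(previous_version, tmp)   # create the new link under a temp name
    os.replace(tmp, link)               # atomically swap it into place
```

Swapping via a rename keeps the link valid at every instant, so jobs resolving the package through it never see a missing path.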

CCB Jira

SSC-254@JIRA

Details (release notes for dataMonitoring/AlarmCfg for AlarmsCfg-05-26-01)

  • AlarmsCfg-05-26-01 29-May-2010 pesce Raised the limit for CalXAdcPedRMS_LEX8 and CalXAdcPedRMS_LEX1 for cal channel 1685.
    • Raised the limit values for the quantities CalXAdcPedRMS_LEX8 (from 10 to 30 for warning and from 15 to 30 for error) and
      CalXAdcPedRMS_LEX1 (from 1.2 to 3.6 for warning and from 1.8 to 3.6 for error) for cal channel 1685.
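The limit change amounts to moving the thresholds of a simple comparison; a minimal sketch of that logic (function and level names are illustrative, not the actual AlarmsCfg code):

```python
def alarm_level(value, warn_limit, error_limit):
    """Classify a monitored quantity against its warning/error limits."""
    if value > error_limit:
        return "error"
    if value > warn_limit:
        return "warning"
    return "clean"

# For CalXAdcPedRMS_LEX8 on channel 1685 the warning limit moved from 10 to 30
# and the error limit from 15 to 30, so a pedestal RMS of e.g. 20 no longer alarms:
# alarm_level(20, 10, 15) -> "error"   (old limits)
# alarm_level(20, 30, 30) -> "clean"   (new limits)
```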

CAL Day

  • Longitudinal position measurement
    • current simulation is inadequate and needs work; discrepancies in CTBCORE are probably associated with this (extent of the excess unclear but probably large); the historical issue with CalTransRms from BT data is also associated with it
    • simulation requires implementation of this effect; discussion about implementation: i) adding finer McIntegratingHits closer to the xtal edge, ii) holding info from every single particle simulated in the shower (computationally unacceptable?), iii) implementing the average behaviour from the data asymmetry (NB each xtal has a specific asymmetry curve that requires calibration)
      • Path forward: modify McIntegratingHit to include the effect of direct light and the Brewster angle effect at Xtal ends. Modify the Xtal response to incorporate the new information available from McIntegratingHit. Observe effects using a modified CalValsTool that includes Xtal end-to-end light asymmetries. Presently being implemented by Tracy.
    • handling longitudinal position in data
      • Bill proposes resolving the ambiguity by looking at the adjacent layers above/below
      • Philippe says this can be done when reconstructing shower profiles
      • Luca B proposes an empirical method to re-align xtals too far away from the shower main axis for EM showers - what happens with MIPs?
  • Energy scale and BT analysis
    • historical discrepancy from BT: implications for the LAT not unanimously accepted; existing reservations about the different calibrations and environments between data and MC
    • Grove prefers the measurement from the CRE cutoff over the BT one, as it comes from the LAT and is not affected by rate/temperature effects
    • agreement to use BT data to study edge effects with EM showers
  • Event Level Analysis:
    • not just background rejection: Event Level analysis also includes energy and PSF analysis
    • many people already involved, possibly more; different tools available (IM, TMine, standalone ROOT+TMVA?); requires guidelines
    • maintain the worksheet structure, possibly a CVS repository
    • structure the worksheet for maximum traceability, e.g. track every split and parallel event paths via status bits, and implement bit patterns for prefilters in blocks of analysis (CPF, TKR, CAL?) coherently
    • ongoing developments in TMine (ASCII file I/O, scriptability?)
  • Energy dependent classes
    • good for systematics (e.g. same spectra in overlapping regions?)
    • avoid explicit energy cuts, but it is ok to optimize selections in specific regions (we know we have different data/MC agreement vs energy; we already use classes for investigations specific to energy ranges, i.e. GRB classes, EGB class)
    • be careful in populating public classes; probably HE-LE x ALL-photon classes is too big a matrix
  • Restructure merit
    • a lot of work, not ideal to wait until recon is done to start thinking about it
    • good to have people identify the needed quantities during recon development; this requires a mechanism to add quantities into variables w/o necessarily recompiling the whole GR (Tracy will set up something)
  • Schedule and Interactions with SWG:
    • senior review in 2 years should be kept in mind; can be used as a natural deadline
    • Pass7 schedule can impact Pass8, e.g. people might wait for Pass8 and not use Pass7 at all
    • need to work seriously on Pass7 systematics to quantify them and make sure people realize we understand Pass7 well, so they use it
    • need to explain what Pass8 is and what implications it will have on analysis, so the SWG can develop expectations for the enhanced performance that will enable specific analyses
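The status-bit bookkeeping suggested under Event Level Analysis could look like the following; the bit assignments are invented for illustration (the notes only name CPF, TKR and CAL as prefilter blocks):

```python
from enum import IntFlag

class Prefilter(IntFlag):
    """Hypothetical per-event status bits, one per prefilter block."""
    CPF = 1 << 0
    TKR = 1 << 1
    CAL = 1 << 2

def passed(status, prefilter):
    """True if the event's status word has the given prefilter bit set."""
    return bool(status & prefilter)

# An event that passed the CPF and CAL prefilters but not TKR:
status = Prefilter.CPF | Prefilter.CAL
```

Packing the outcome of every split into one status word is what would let parallel event paths stay traceable through a single worksheet column.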

NB: these are unedited notes

Sparse notes from morning session

  • Ghosts
    • look into CAL trigger info, probably only for high energy ghosts: it may be possible to look for a missing CAL-HI primitive
    • slide 14: yellow layers are inferred ghosts, i.e. layers that did trigger, while the other blue layers did not trigger (but had a request either in the TEM diagnostics or from a 3-in-a-row)
    • slide 16, comment from Bill: might be residual activation from exiting the SAA, check with Neil Johnson about the average energy expected from such events; Eric/Luca: we have indications of such activations from several monitoring plots and have found specific quantities that display such behaviour
    • on ghosts from CAL (Steve): can we get ghosts in CAL by looking at the inconsistency between the readout energy and the selected readout range (e.g. an out of time ghost whose signal decays in time and latched a higher energy readout range)
  • CR Tracking
    • essentially available in GR; Leon: need to think about restructuring, we now have different types of tracks and need to decide which to present to subsequent steps; Bill: need to work on the ED too; Eric: need to find a way not to step on each other, like a key system in the TDS; Tracy: easy to do, need to change the track list from a vector to a map; Eric: need to change ACDValTools accordingly, it does not currently know about this; Leon: implementation issue, need to talk over this; Philippe: do we have a hint of how often this finds a new track? Bill: not really; Leon: it should find the same ghost tracks that we find; Leon: there is a sense that adding all this will slow down recon, but it may not be so, since, e.g., tracks found by the CR tracking may not have to be reconstructed by the gamma track finder
  • Tree-based Track fitting
    • processing time for combo pat rec (current) is artificial and determined by choices in recon that limit the combinatorics
    • slide 25: currently no mechanism to merge two trees, so in the case of a photon converting high up, the Tree-based track fitting misses the direction; Luca B: any bias from the method with the angle? Bill: namely, which method has a larger fish-eye effect?
      Bill: I think ultimately this approach will mostly impact the bkg rejection, since it gives a full picture of the shower development, so I don't expect too much improvement on the PSF, but we can think of many metrics of the tree we can move to CTs to train them; incidentally, our bkg rejection suffers at high energy, which is where this tool works best
  • PSF/Eric: I would say this is the most you can do with Merit; Luca B: I scanned some 100s of events, high energy photons > 5-10 GeV, and compared residuals of the TKR direction vs the CAL direction, and found a better picture than I expected, need to check; you do not need to scan many events to find cases where one or two xtals in the CAL skew the direction and centroid; Eric: this is not an effect of isolated xtals that pull the centroid, we are talking about the longitudinal position being off and impacting the centroid measurement
    Eric: this plot tells me it is a matter of longitudinal position: things are fine in the middle of a tower, but when you move closer to tower boundaries data and MC disagree more
    Toby: we should start always specifying what happens for Front and Back PSF separately
  • CTBCORE: this is mostly related to Pass7, but I want to discuss it here since we will have to face similar validations for Pass8.
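The longitudinal position under discussion comes from the end-to-end light asymmetry of each crystal; a schematic version of that measurement (the linear slope below is a stand-in for the per-xtal asymmetry curve mentioned above, which in reality needs its own calibration):

```python
def longitudinal_position(a_pos, a_neg, slope):
    """Estimate the hit position along a crystal from its light asymmetry.

    a_pos, a_neg: signals read out at the two crystal ends
    slope: per-crystal calibration constant (illustrative linear model;
           real xtals have individually calibrated asymmetry curves)
    """
    asym = (a_pos - a_neg) / (a_pos + a_neg)
    return slope * asym
```

A hit at the crystal center gives equal signals at both ends and hence position zero; any miscalibration of the asymmetry curve shifts the reconstructed position and, through it, the CAL centroid.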

Notes from afternoon Tagup

  • discussed more specific developments this morning, will need to think about knitting all these together
  • Tracy managed to compile Geant4 v9.3 in GR!
  • BT paper: Luca and Philippe discussed plans for the BT paper; it would be worth working on the energy scale results from Melissa and on the issues related to the longitudinal CAL position measurement (aka the CTBCORE issue) using BT data, and trying to add them to the paper; we think this would make the BT paper more interesting, as opposed to just compiling a summary of quite old results and stating conclusions unrelated to the LAT calibration
  • Leon: started scanning a skim of periodic trigger events with some people (Philippe, Luca B, Johan?) and found them to fall into two piles: one has an energy release consistent with MIPs, which is surprising; the other has completely uncorrelated tracks and energy deposits in the CAL - investigations are ongoing
  • several people have installed and started working on TMine (Steve, Luca L, ...?) and initiated discussions with the developers (Eric, Alex)
  • discussion about the CAL longitudinal position calibration: Sasha reports it is currently not applied as it requires special handling of layer7 (question), since it has low statistics; Eric and Sasha report that we are currently using the identical calibration for MC and recon, which implies that systematics from the real CAL are not in the MC and its response is bound to be better

We had excellent talks in the morning that people can look at to get an update.
You can find a complete overview from Bill and specific updates on some topics (more in the following days).
AI marks identified action items.

Here are some more discussion items we went through in the working sessions:

Moving to Geant4

  • worth trying but not at the expense of slowing down other development areas
  • besides benefits across all areas, particularly interesting are the high energy routines (from the LHC and future ones) and the hadronic physics lists (see slides 15, 22, 46 from this talk at LLR to learn about the gray areas between the existing hadronic physics lists we use, and slide 50 for the consequences on energy measurement)
  • BUT it's a lot of work, here is the detailed list of things to check from Francesco:
    1. Decide which current implementations we need to maintain (at least to test the new G4 version)
    2. Compile new G4 source at SLAC (opt and no-opt).
    3. Revise the current code of G4Generator, G4HadronSim and GlastMS (maybe also G4Propagator?)
    4. Create new BTR and new GR with the new G4 code
    5. Define the new data sets for crucial tests before switching to the new G4.
    (at least BeamTest and Flight Data, e.g. Vela, the Earth albedo, and 1 week of full sim, e.g. ~DC2?)
  • AI: Tracy will start with the mechanics and evaluate whether it is worth continuing; possibly Francesco could help starting from July on

Validation datasets

  • start gathering datasets from Eric's list and his Paris CM talk for possible uses of them; specifically discussed:
    • Vela: need a more detailed Vela-like Gleam source wrt tweaking AG; AI: will ask Tyrel to provide a good spectral model and comment on the required frequency of ephemeris updates; limited at both high energy (cutoff) and low energy (~200 MeV) by bkg issues
    • Crab/Geminga: not much gain wrt Vela
    • AI: request an Earth limb pointed observation (1-2 orbits max); need to support the request with existing analysis of Earth limb data; the Earth limb is optimal as it is short, quick to reprocess, and has no dependence on external choices (like the phase cut, or the annulus cut for "aperture photometry"); other "customers" for new Earth limb data would be Walrit, Rolf and Stefan for science studies
    • Alignment dataset: AI: move existing AGN separate datasets (~260 in Toby's set) into a single file containing photons within a given threshold from the actual position
    • priorities: Vela and Earth limb should come first in reprocessing, then Mkn421
    • prescaling events with energy: consider something more realistic beyond the existing scheme of smaller prescales for different energy ranges, like an energy-dependent function (prescales are there to keep file sizes reasonable)
    • processing time would be ~1 day for ~1M evts (as per the current table); disk space would not be an issue; AI: need to sort out a mechanism to trace eventIDs skimmed at the merit level back into digis and recon (critical to save space); probably need to keep full MC info for a bunch of files
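The energy-dependent prescale mentioned above could be a smooth function of energy rather than a few fixed ranges; a sketch of one such function (the power-law form and all parameter values are assumptions, not an agreed scheme):

```python
def prescale_factor(energy_gev, e0=10.0, index=1.5, max_factor=100):
    """Prescale heavily at low energy, where events are plentiful.

    Returns 1 (keep everything) at and above e0, and a smoothly growing
    factor (capped at max_factor) as energy decreases below e0.
    e0, index and max_factor are illustrative placeholders.
    """
    if energy_gev >= e0:
        return 1
    return min(max_factor, max(1, int((e0 / energy_gev) ** index)))
```

With these placeholder parameters, events at 1 GeV would be kept at roughly 1-in-31 while everything above 10 GeV is kept, which is the kind of trade-off that keeps file sizes reasonable without hard range boundaries.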

Reason for change

The new version (AlarmsCfg-05-26-00, as opposed to AlarmsCfg-05-25-00) features a few limit changes on the Tick20MHzDeviation (FastMon and Digi) and Digi_Trend_Mean_Tick20MHzDeviation.

Test Procedure

We have processed monitoring products from real on-orbit data (LPA) locally with this version of AlarmsCfg.

Rollback procedure

The package can be rolled back to the previous version by flipping a soft link. Also note that the package is completely independent from any other package running in the pipeline and will not cause a version change of L1Proc.

CCB Jira

SSC-253@JIRA

Details (release notes for dataMonitoring/AlarmCfg for AlarmsCfg-05-26-00)

  • AlarmsCfg-05-26-00 23-May-2010 pesce Some changes on the limits for the Tick20MHzDeviation (FastMon and Digi) and Digi_Trend_Mean_Tick20MHzDeviation
    • Warning limit for Digi and FastMon Tick20MHzDeviation_TH1 and DigiTrend Mean_Tick20MHzDeviation changed from -192 to -196.
  • AlarmsCfg-05-25-00 23-Apr-2010 pesce Some changes on the limits for Digi and FastMon Tick20MHzDeviation_TH1 and DigiTrend Mean_Tick20MHzDeviation.
    • Returned to the old alarm limit values for the Digi and FastMon Tick20MHzDeviation_TH1 and DigiTrend Mean_Tick20MHzDeviation quantities (i.e. from -195.0 to -192.0).
  • AlarmsCfg-05-24-06 19-Apr-2010 pesce Some changes to the limits on Digi and FastMon Tick20MHzDeviation_TH1 and DigiTrend Mean_Tick20MHzDeviation
    • Warning rates for Digi and FastMon Tick20MHzDeviation_TH1 and DigiTrend Mean_Tick20MHzDeviation changed from -192.0 to -195.0.
  • AlarmsCfg-05-24-05 19-Apr-2010 pesce Some changes to the limits on the normalized rates
    • Warning limit for merit trend OutF_NormRateSourceEvts from 1.75 to 1.95, OutF_NormRateDiffuseEvts from 1.85 to 1.95 and OutF_NormRateTransientEvts from 1.55 to 1.55
    • Relevant Jira: GDQMQ-341

Reason for change

The new version (AlarmsCfg-05-25-00, as opposed to AlarmsCfg-05-24-04) features a few limit changes on the merit trend normalized rates.

Test Procedure

We have processed monitoring products from real on-orbit data (LPA) locally with this version of AlarmsCfg.

Rollback procedure

The package can be rolled back to the previous version by flipping a soft link. Also note that the package is completely independent from any other package running in the pipeline and will not cause a version change of L1Proc.

CCB Jira

SSC-252@JIRA

Details (release notes for dataMonitoring/AlarmCfg for AlarmsCfg-05-25-00)

  • AlarmsCfg-05-25-00 23-Apr-2010 pesce Some changes on the limits for Digi and FastMon Tick20MHzDeviation_TH1 and DigiTrend Mean_Tick20MHzDeviation.
    • Returned to the old alarm limit values for the Digi and FastMon Tick20MHzDeviation_TH1 and DigiTrend Mean_Tick20MHzDeviation quantities (i.e. from -195.0 to -192.0).
  • AlarmsCfg-05-24-06 19-Apr-2010 pesce Some changes to the limits on Digi and FastMon Tick20MHzDeviation_TH1 and DigiTrend Mean_Tick20MHzDeviation
    • Warning rates for Digi and FastMon Tick20MHzDeviation_TH1 and DigiTrend Mean_Tick20MHzDeviation changed from -192.0 to -195.0.
  • AlarmsCfg-05-24-05 19-Apr-2010 pesce Some changes to the limits on the normalized rates
    • Warning limit for merit trend OutF_NormRateSourceEvts from 1.75 to 1.95, OutF_NormRateDiffuseEvts from 1.85 to 1.95 and OutF_NormRateTransientEvts from 1.55 to 1.55
    • Relevant Jira: GDQMQ-341