Thoughts about CAL data monitoring are collected here

Monitoring CAL quantities in the first LPA runs

The LISOC will work to keep a correct, public copy of the timeline in slacspace. The times and dates I've written here will probably not be right, so please go to slacspace to see the real timeline. You'll need your Windows password, and remember your username is preceded by "SLAC\" (e.g. I'm SLAC\grove). The URL is https://slacspace.slac.stanford.edu/sites/ISOC/MP/Lists/LEO%20%20Activation%20Timeline/Science%20Ops.aspx

You'll be able to do these tasks with a combination of monitoring plots from LISOC and s/w written by Zach and Sasha, described here.

A useful overview of the plots is at https://confluence.slac.stanford.edu/display/ISOC/Review+of+CAL+Data+monitoring

In basic form, here's the list of tasks.

0. For each task below, record the CAL AFEE temperatures

For each task and each dataset below, it's important to understand what the AFEE board temperatures are. They are reported in the LISOC data monitoring. See the instructions in the next section. Compare them to the AFEE temperatures during Observatory cold thermal balance, when the LAC calibration data were acquired.

The CAL AFEE temperatures can be displayed under Telemetry Trending -> Health and Safety -> Temperature -> Calorimeter: pick the LHKT#CALAF## value you need, where # is the tower number (in hex) and ## is 0 = X+, 1 = Y+, 2 = X-, .... Beware: this page looked different last week, so the way to access these variables might change too.

If you want to grab the temperatures from the command line, go to the ISOC utilities and use the MnemRet.py script.
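
If you want to automate the comparison against the thermal-balance reference, here's a minimal sketch. It assumes the LHKT#CALAF## values have been exported to a CSV of (mnemonic, temperature); the file layout, column names, reference dictionary, and tolerance below are all invented for illustration, not an ISOC format.

    # Sketch only: flag CAL AFEE boards whose average temperature has moved
    # away from the cold thermal-balance reference. The CSV layout and the
    # 2 degC tolerance are assumptions, not ISOC conventions.
    import csv
    from collections import defaultdict

    REFERENCE_C = {}      # e.g. {"LHKT0CALAF00": -5.0, ...} from cold thermal balance
    TOLERANCE_C = 2.0     # arbitrary flag threshold

    def mean_temps(csv_path):
        """Average each exported AFEE mnemonic over the time span in the file."""
        sums, counts = defaultdict(float), defaultdict(int)
        with open(csv_path) as f:
            for row in csv.DictReader(f):
                sums[row["mnemonic"]] += float(row["temperature_C"])
                counts[row["mnemonic"]] += 1
        return {name: sums[name] / counts[name] for name in sums}

    def flag_out_of_family(csv_path):
        for name, temp in sorted(mean_temps(csv_path).items()):
            ref = REFERENCE_C.get(name)
            if ref is not None and abs(temp - ref) > TOLERANCE_C:
                print(f"{name}: {temp:.1f} C now vs {ref:.1f} C at thermal balance")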

1. From the initial LPA runs, determine whether there are any serious problems with zero suppression or CAL triggering.

The useful datasets are
a. the runs we start via PROC requests on L+15. They'll be conSciOps_noCal, calibOps, and nomSciOps
b. the short 5-min runs of all LPA configs on L+16, including all of the fleCalib*, fheCalib*, and lacCalib* configs.
c. the longer calibOps and conSciOps_noCal runs on L+16.

The tools here are mostly LISOC monitoring plots.

From those runs, we need to know absolutely ASAP whether there are any problems with LAC, FLE, or FHE. We need that information at the MOC within hours after the data come out of the pipeline if we're to have any chance to fix them before the FLE and FHE timing-in data are acquired. Sorry about that. Please communicate this information to the Shift Coordinator (Rob or Eduardo or ?) in the Mission Support Room at SLAC absolutely as soon as you can, so that he can relay it to us in the MOC at GSFC.
Phone at LISOC MSR: +1 650-926-7900, 7901, 7902
Phone near LAT stations at GSFC MOC: +1 301-286-0866

Go to the GLAST Data Quality Monitoring pages http://glast-ground.slac.stanford.edu/DataQualityMonitoring/.
Select an appropriate time interval, and click on the run ID.
Click on the "Expert" mode in the upper right corner of the left-hand panel. When you're in "Expert" mode, that word will show "Shifter", and when you're in "Shifter mode, that word will show "Expert", just to be confusing.

To find the number of trigger requests per tower in this run or sum of runs, click on
Root > Digi > GEM > TriggerVectors > CAL_HI map(tower)
Root > Digi > GEM > TriggerVectors > CAL_LO map(tower)
Of course, if one GCFE is hot, one of the towers will be significantly out of family.
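
For a quick offline cross-check of those maps, something like the sketch below works, assuming the GEM CAL_LO or CAL_HI trigger vector for each event is available as a 16-bit integer with bit i set when tower i requested; the input and function names are invented.

    # Sketch: per-tower CAL_LO / CAL_HI request counts from the GEM trigger
    # vectors, plus a crude out-of-family check for a hot GCFE.
    import numpy as np

    def requests_per_tower(trigger_vectors):
        """trigger_vectors: iterable of 16-bit ints, one per event."""
        counts = np.zeros(16, dtype=np.int64)
        for vec in trigger_vectors:
            for tower in range(16):
                if vec & (1 << tower):
                    counts[tower] += 1
        return counts

    def out_of_family(counts, n_sigma=5.0):
        """Flag towers far above the median request count (cut is arbitrary)."""
        median = np.median(counts)
        mad = np.median(np.abs(counts - median)) or 1.0
        return [t for t in range(16) if counts[t] > median + n_sigma * 1.4826 * mad]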

To see whether CALLO or CALHI TREQ rate is too high, click on
(hmmm, I'm not sure what to do, yet)

To find the distribution of CAL hit occupancy, click on
Root > Digi > CAL > Num. logs hit (LAT)
Root > Digi > CAL > Num. logs hit (tower)
If zero suppression is bad, the minimum occupancy won't be zero. If it's really bad, the peak will be somewhere above zero. Find out which tower is causing the problem, or if all towers are.

To see a 2D histogram of the average hit occupancy per layer and tower, click on
Root > Digi > CAL > CAL Hit map (tower,layer)
or its complement
Root > Digi > CAL > Missed logs map (tower,layer)
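
If you want to reproduce these occupancy checks outside the monitoring pages, here's a rough sketch, assuming the digis have already been unpacked into a per-event list of (tower, layer, column) hit logs; that input structure is an assumption, not the real digi format.

    # Sketch: "Num. logs hit" distribution and the (tower, layer) hit map,
    # with a crude zero-suppression sanity check.
    import numpy as np

    def occupancy_checks(events):
        """events: list of per-event hit lists, each hit a (tower, layer, column)."""
        n_hit = np.array([len(hits) for hits in events])     # Num. logs hit (LAT)
        hit_map = np.zeros((16, 8))                          # CAL Hit map (tower, layer)
        for hits in events:
            for tower, layer, column in hits:
                hit_map[tower, layer] += 1
        hit_map /= max(len(events), 1)                       # average hits per event
        # With good zero suppression, some events should have zero logs hit
        # and the distribution should peak at or near zero.
        if len(n_hit) and n_hit.min() > 0:
            print("WARNING: no events with zero logs hit; zero suppression suspect")
        return n_hit, hit_map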

Volunteers for this activity: Fred Piron, Eric Nuss, Benoit Lott, Sasha Chekhtman; Thierry Reposeur & Damien Parent at SLAC; Dave Smith in Bordeaux; Veronique Pelassa. Please add your name.

2. From the initial LPA runs, "measure" the LAC, FLE, and FHE thresholds.

The useful datasets are a, b, and c above plus
d. the calibOps runs on L+18
e. the First Light Image runs of nomSciOps on L+19 to L+22.

The tools here are code from Zach and Sasha. See Zach's presentation.

Again, please don't wait until datasets d and e are acquired to start.

Volunteers for this activity: Berrie Giebels, David Sanchez, Sasha Chekhtman. Please add your name.

link to LAC, FLE, and FHE thresholds page

3. From the initial LPA runs, measure pedestals, look for temperature dependence.

This is really the first step of the calibGenCAL calibration process.

The useful datasets are a, b, c, d, and e above.

Volunteers for this activity: Aous Abdo, Berrie Giebels, David Sanchez, Sasha Chekhtman. Please add your name.

Pedestal Studies (comparison to cold TVac)

CAL Peds TVac Pedestal Studies (comparison to Run r0236323982_v000)

4. From the LAC, FLE, and FHE calibration data acquired on L+23 and L+24, measure the LAC, FLE, and FHE threshold DAC calibration curves.

Here the useful datasets are the lacCalib*, fleCalib*, and fheCalib* acquisitions. Many of you have contributed to the analysis code for these data. See Zach's presentation.

A description of the simulated data for the CAL FLE/FHE threshold calibration has been added to https://confluence.slac.stanford.edu/display/ISOC/Detector+Calibration+Sequence after the item "Monitoring and control plots".

I'll have a better estimate for you shortly, but you have less than 3 days to analyze those data and deliver threshold DAC calibration curves. Isn't this fun?

Volunteers for this activity: Sasha Chekhtman.

General monitoring thoughts are below

Compare to thoughts I added to https://confluence.slac.stanford.edu/display/DC2/Monitoring on 5 Oct 2006.

Note: I did a lot of rearranging of this section on 13 Mar, but I've added only a few items, mostly in the last two sections before the email.

Daily standard plots

This includes a lot more than just CAL info, but there are so many parallel features among the instruments that we might as well list them all.

I haven't specified the sampling time for these strip charts. Not all will be the same. Some could be done every 10 sec, but others require longer summing. TBD TBR.

Environment monitoring

Strip charts of temperatures, say a couple grid points, mean of inner TKRs (top cable temp), mean of corner TKRs, mean of inner CAL baseplates, mean of corner CAL baseplates, a couple ACD temps.  5-min averages are fine.

Strip chart of LAT hardware Configuration state.  I'm not sure quite how to plot this, but plot identity of SIU and EPUs powered, identity of power feeds, ....  I'll flesh this out later.

Strip chart of LAT Mode: Quiescent, Physics, SAA, TOO, AR, HOLD.  Again, not quite sure how to plot this, maybe a color-coded line.

Strip charts of vertical cut-off rigidity or McIlwain L parameter (not very interesting when the LAT is stationary).

Strip charts of attitude information:  pitch angle WRT local zenith, RA and dec of Z, ...

Strip charts of location information: s/c RA and dec, s/c lat and lon, ...

Here are a few more, added 10 Oct 2007:

Strip chart of Day/Night flag, so we know whether LAT was in the sun at each instant.

Strip chart of zenith angle to Earth's limb. Maybe also some measure of arc length of limb within nominal LAT FOV.

Event classification information, background rates

Strip charts of livetime, deadtime.

Strip charts of trigger sent, discard, prescale, and deadzone rates.

Strip charts of TKR, CAL-LO, CAL-HI, and CNO trigger req rates from the GEM Condition Summary word. In addition, strip chart of TKR && ROI (mostly particles) and TKR && !ROI rates (mostly not particles).    Maybe do all 8 trigger sources.

Strip charts of TKR, CAL-LO, and CAL-HI trigger req rates from each tower, taken from the GEM trigger vector. Strip charts of CNO trigger req rates from each board.

Strip charts of trigger engine rates, reconstructed from downlinked data that satisfy the event filters. I realize this is biased info.

Strip charts of rates passed by filters (GFC, MFC, HFC, and DFC). It will take some effort to reconstruct this from the filter bits, I suppose. Note that the DFC has two instances. The first passes periodic triggers, so perhaps its strip chart isn't interesting. But the second passes the "unbiased" TKR && ROI && (don'tCare) trigger rate prescaled by 250:1, so it'll show ~20 Hz average, with good modulation through the orbit.

Detector information

Strip charts of total TKR, total CAL, total ACD occupancy rates (e.g. strip hits per unit time).  Same plots as TKR EMI/EMC test, now in pipeline.

Subpages for strip charts of TKR and CAL occupancy rates by Tower.  Same plots as TKR EMI/EMC test, now in pipeline.

Strip chart of sigma of overall CAL LEX8 and HEX8 pedestal width distributions calculated in Item 1 of 24 Jan 2007 email message below. Measures average noise and its variation through the day, through time.
Note: see comment on time binning below.

Strip chart of mean and rms of the 3 ratio centroids (P/M, P/p, M/m) for CAL calculated in Item 2 of 24 Jan 2007 email message below. Measures global PDA bond stability through time.
Note: see comment on time binning below.

Strip chart of median raw CAL energy sum for events with non-zero CAL energy sum.

Subpages with strip charts of raw CAL energy sum by tower. Same calc as previous item, but by tower.

 Post-recon end-to-end sanity checks (added by Dave Smith, 25 September 2007) (see also 2nd and 3rd slides of this presentation).

Extrapolate the TKR track to the CAL. For cleanly hit CsI logs, make histograms of dN/dE and fit with a Landau. Histogram the Delta X,Y of the TKR and CAL positions and fit with a gaussian. Store the lists of fit results. It's easy to compare the histogram of values with a template histogram, or to compare current values with reference values (Monte Carlo or a gold-plated, certified reference data run), and thus flag outliers automatically. It's easy to make a single summary page. This tests the RDB metadata database contents at the very end of the RECON chain. Dave S did this with his code during I&T; now that GCR.ROOT exists it is probably smarter to use that. We (Fred, Dave, Thierry, Damien) will practice during Ops Sim1, then advise David Paneque how best to add something like this to the monitoring variables.
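
A rough sketch of that comparison follows, using scipy's Moyal distribution as a stand-in for the Landau fit; the input arrays, reference values, and tolerances are illustrative only, not Dave's actual code.

    # Sketch: fit the energy deposit of cleanly hit logs and the TKR-CAL
    # position residuals, then flag departures from a reference run.
    import numpy as np
    from scipy.stats import moyal, norm

    def fit_and_compare(energies_mev, dx_mm, reference):
        """reference: dict of fit results from Monte Carlo or a certified run."""
        mpv, width = moyal.fit(energies_mev)   # Moyal used as a Landau stand-in
        mu, sigma = norm.fit(dx_mm)            # gaussian fit to Delta X (or Y)
        current = {"mpv": mpv, "dx_mean": mu, "dx_sigma": sigma}
        flags = {
            "mpv": abs(mpv - reference["mpv"]) > 0.10 * reference["mpv"],
            "dx_mean": abs(mu - reference["dx_mean"]) > 2.0,  # mm, arbitrary cut
            "dx_sigma": abs(sigma - reference["dx_sigma"]) > 0.25 * reference["dx_sigma"],
        }
        return current, flags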

Daily (or orbit) sums or summaries

2d histograms of detector occupancies from digi report

2d histograms of CAL range occupancies (i.e. fraction of times each range is best range)

Many others from digi and recon reports.  More later on this.  Then again, why not leave them in the digi and recon reports?

List of GCFEs that gave no LEX8 data in that day (or orbit, depending on duration of summary report). This flags failed channels for big PIN diodes.

List of GCFEs that gave no HEX1 data in that day (or orbit, depending on duration of summary report). This flags failed channels for small PIN diodes.

Added 6 Apr 07 Occupancy ratio summary. The intent is to find specific GCFEs for which the current occupancy has changed from their typical occupancy. See Occupancy Ratio Summary comment box below for further details.

Added 6 Apr 07 Pedestal noise ratio summary. The intent is to find specific GCFEs for which the pedestal width has changed from its typical value. See Noise Ratio Summary comment box below for further details.

Added 25 June 07 A set of 2D histograms designed to detect suspect CAL threshold settings. See Suspect CALLO and CALHI Finder comment box below for further details.

This section contains mail messages

Date: Wed, 24 Jan 2007 17:06:39 -0500
From: J. Eric Grove <eric.grove@nrl.navy.mil>
To: Borgland Anders <borgland@slac.stanford.edu>,
Alexandre Chekhtman <chehtman@ssd5.nrl.navy.mil>,
Hascall Patrick <hascallp@slac.stanford.edu>
Subject: more CAL plots to add to digi report

Anders buddy,

One of the things we've talked about adding to the digi report is a pedestal summary. But we could also add something that would supplement the calf_mu_trend test in the CAL CPT. Are you still in charge of digi reports?

1. Pedestal, 4 histograms for every LPA run.

Accumulate 12000 histograms (1536 logs x 2 ends x 4 energy ranges) from periodic triggers!! For each log end, fit gaussians to the LEX8 and HEX8 histograms and accumulate the LEX8 and HEX8 gaussian sigma values into histograms. Discard the centroid because it's meaningless. For each log end, calculate the mean and rms of the LEX1 and HEX1 values in the 5 bins surrounding the modes, and accumulate the LEX1 and HEX1 rms values into histograms (and pretend those are gaussian sigmas). Discard the centroid because it's meaningless.

In the digi report, plot 4 histograms (LEX8 sigma, LEX1 sigma, HEX8 sigma, HEX1 sigma), each with 3072 entries, and let root calculate the mean and rms of the pedestal sigmas.

This gives an overall measure of CAL noise, and a sense of whether there are two channels or two hundred channels that are out of
family, for every run. It's similar to something we do in the calu_pedestals_ci test in the LAT/CAL CPT.
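
Schematically, something like the sketch below (just an illustration: it assumes the periodic-trigger ADC samples have already been collected per channel into arrays, and the data layout is invented):

    # Sketch of the LEX8/HEX8 part of Item 1: one gaussian sigma per log end,
    # then a summary histogram of those sigmas for the digi report.
    import numpy as np
    from scipy.stats import norm

    def pedestal_sigmas(samples_by_channel):
        """samples_by_channel: {(tower, layer, column, end): adc_array} for one
        range (LEX8 or HEX8), periodic triggers only."""
        sigmas = {}
        for channel, adc in samples_by_channel.items():
            _, sigma = norm.fit(adc)          # gaussian fit; centroid discarded
            sigmas[channel] = sigma
        return sigmas

    def sigma_summary(sigmas, nbins=100, max_sigma=5.0):
        """3072-entry distribution of pedestal widths; its mean and rms are the
        overall noise figures of merit."""
        return np.histogram(list(sigmas.values()), bins=nbins, range=(0.0, max_sigma))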

2. Optical gain, 3 histograms for every LPA run.

Accumulate 4500 histograms (1536 logs x 3 ratios) from all 4-range readouts that aren't periodic triggers. There aren't many of those events in ground 22x runs, but there are in ground 71x runs, and there will be many on orbit. The three ratios are
i. LEX1(plus face) / LEX1(minus face) = P/M ~ 1
ii. LEX1(plus face) / HEX8 (plus face) = P/p ~ 6
iii. LEX1(minus face) / HEX8(minus face) = M/m ~ 6
Note that each of those quantities is pedestal subtracted, otherwise the ratio doesn't make any sense. And each one needs a cut that rejects an event from this histogram if it's <50 ADC bins (to avoid binning errors). For each of the 1536 logs, fit three gaussians and accumulate the centroid into 3 histograms, one for each of the ratios.

Alternatively, you could do this in energy space from the CAL tuple, but I'll bet there's a causality issue with creating the recon report before the CAL tuple or something.

In the digi (or recon) report, plot the 3 centroid histograms (P/M, P/p, M/m), each with 1536 entries, and let root calculate the mean and rms of the ratio centroids.

This gives an overall measure of whether there were diode bond failures. It won't tell us which CDE is bad, and there are some
systematic problems with certain channels on each AFEE board (remember David figured that out while he was at SLAC?). Note added 12 Mar 07: perhaps we should exclude those known systematic issues from board layout in these histograms.
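
Schematically, the ratio part might look like the sketch below (pedestal-subtracted LEX1 and HEX8 values per log from 4-range, non-periodic events are assumed to be in hand already; the layout and names are invented):

    # Sketch of Item 2: gaussian centroids of P/M, P/p, M/m for one log,
    # with the <50 ADC bin cut applied on both numerator and denominator.
    import numpy as np
    from scipy.stats import norm

    MIN_ADC = 50.0   # reject small signals to avoid binning errors

    def ratio_centroids(lex1_plus, lex1_minus, hex8_plus, hex8_minus):
        """Each argument: pedestal-subtracted ADC array for one log, one entry
        per accepted 4-range, non-periodic event."""
        arrays = {name: np.asarray(a, float) for name, a in
                  (("P", lex1_plus), ("M", lex1_minus),
                   ("p", hex8_plus), ("m", hex8_minus))}
        centroids = {}
        for name, num, den in (("P/M", "P", "M"), ("P/p", "P", "p"), ("M/m", "M", "m")):
            a, b = arrays[num], arrays[den]
            keep = (a > MIN_ADC) & (b > MIN_ADC)
            if keep.any():
                mu, _ = norm.fit(a[keep] / b[keep])   # centroid goes into the summary
                centroids[name] = mu
        return centroids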

It won't add much run time to the digi reporting to calculate 16,000 gaussian fits, will it? (smile)

Just a reminder of things to add to your queue.

Thanks,
Eric

Another message

From: grove@ssd5.nrl.navy.mil
Subject: Additions to digi report for CAL
Date: 17 July 2006 5:48:46 PM EDT
To: borgland@slac.stanford.edu, neil.johnson@nrl.navy.mil

Anders,

How about the following additions to the CAL plots for the digi report?

Average ped-subtracted pulse height (LEX8 data only, say)
Average energy (all ranges, but first-range-only for the 4-range readouts)
Average plus/minus ratio, pedestal-subtracted ratios please! (LEX8 only?, and be sure to put 100 ADC bin threshold on each end before
taking ratio)
RMS of plus/minus ratio (same qualifiers: >100 bins ped-subtracted, LEX8 only)
Range occupancy (i.e. 4 plots of fraction of readouts in each range, averaged over xtals in a layer)

Probably others....

Eric

Here are some thoughts about Trigger monitoring.

From: eric.grove@nrl.navy.mil
Subject: trigger monitoring
Date: 26 January 2007 9:47:32 AM EST
To: kocian@slac.stanford.edu

Hi Martin,

I just read through your trigger monitoring slides on confluence.

The real reason I wrote is that I have a few ideas of rate monitoring, so these are comments to pp 5 and 6.

1. Rate v. geomag latitude is fine, but so is rate v. McIlwain L parameter. And I guess what I'd also really do is create a strip chart of the McIlwain L value to run in parallel with the trigger rates.

2. We need to monitor and display on timescales shorter than rate per day or rate per orbit. For CGRO/OSSE, we used 4-sec and 16-sec rates (depending on the data src, but mostly 16-sec rates) in various energy bands and detector components (e.g. spectrometer, shield, and particle anticoincidence det were shown separately). That's because there are interesting celestial and terrestrial phenomena occurring on much shorter timescales that trigger monitors might see. Two examples:
a. solar flares: gammas, particles, geomagnetic disturbances from solar particle events, etc. Features have timescales of seconds, with total event durations of minutes to hours.
b. particle precipitation events, including occasional events from a couple strong radio transmitters (e.g. there's one in Australia). Total event durations are seconds to ~1 minute. GBM will likely interpret these as GRBs.

I'd make strip charts of various trigger rates with, say, 10- or 15-sec binning, with the total duration of each plot either the total time span of data in a downlink or a 24-hr day. Heck, probably both. Each downlink would create its monitor plot, and then maybe they could be summarized at the end of the day with a single, merged plot.
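
A minimal sketch of that binning, assuming you have event (or counter-sample) timestamps in mission-elapsed seconds for one downlink; the function and argument names are invented:

    # Sketch: 15-sec binned rates for a strip chart covering one downlink.
    import numpy as np

    def strip_chart_rates(timestamps_s, bin_s=15.0):
        """Return (bin centers, rate in Hz) for a simple rate-vs-time plot."""
        t = np.asarray(timestamps_s, dtype=float)
        edges = np.arange(t.min(), t.max() + bin_s, bin_s)
        counts, _ = np.histogram(t, bins=edges)
        centers = 0.5 * (edges[:-1] + edges[1:])
        return centers, counts / bin_s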

3. The trigger rates for strip charting that come to mind are below, and I guess I'd derive say 15-sec avgs of each for plotting. I'm listing a lot of plots so presumably we need some hierarchical way to view them.
a. Sent, Discard, Prescaled, and Deadzone rates from the GEM LRS counters.
b. the livetime or deadtime fraction.
c. the 16 trigger engines, derived from the events leaked either by the PFC or DFC filters.
d. Bill's favorite monitor of particles, the TKR && ROI && (don'tCare) rate, derived from the leaked events.
e. each of the 8 GEM trigger primitives, derived from the leaked events.
f. each of the individual tower TKR, CALLO, and CALHI trigger rates from the trigger vector, again derived from the leaked events.
g. rates of events accepted by GFC, MFC, and HFC, so that'd crudely be the rates of gammas, MIPs, and heavy ions, this time derived from the events passed by those filters, of course, rather than by leakage.

I guess that the leaked events are the best sample to use to derive rates. Need to think about that.

I like your "Fraction of chg particles over total rate" idea. Maybe ratios of some of these other quantities would be good too.

For the SAA monitor ("trigger rate during last min before and after SAA"), I think the items I've listed in 3 cover that, and I really think we absolutely need continuous rate plots - not just before/after SAA. Another thing that would be useful for the SAA monitor would be avg CAL hit occupancy: the SAA activates 128I in the CsI, which has a ~30-min beta decay with ~2 MeV endpoint energy. Since that endpoint is close to the CAL zero supp threshold, we should see CAL occupancy a bit higher right after SAA exit and decaying with 30-min timescale to the nominal occupancy. Again I guess we select the leaked events to calculate occupancy.

Come to think of it, maybe strip charts of ACD, CAL, and TKR occupancy are useful in general. OK, it's not trigger rate, but it's related to the sources of triggers.

I'd also overlay times of ITS (Immediate Trigger Signal) messages from the GBM, LAT burst alert messages, etc. And indicate the intervals of LAT pointing, whether that means just the pitch from the zenith while we're doing sky survey or the intervals of three-axis pointing (the point there is that the particle exposure is a function of s/c attitude too).

I'll dig up the email I sent to Eric Charles with the daily monitoring plots we used for OSSE, and I'll forward that to you.

Eric

Here's a reminder of the contents of the CGRO/OSSE Daily Standard Plots


See "page 4" of the Daily Standard Plots here.

See "page 5" of the Daily Standard Plots here.

From: grove@ssd5.nrl.navy.mil
Subject: Some GRO daily monitoring plots
Date: 3 November 2006 7:15:22 PM EST
To: echarles@slac.stanford.edu, borgland@slac.stanford.edu
Cc: eduardo@slac.stanford.edu

Eric,

Here's a teaser of the OSSE on-orbit daily standard plots, sent to you without enough explanation. We made a series of plots in the production data processing for every single day of the 9-year mission as part of the data integrity and instrument health monitoring process. Both plots are for day 91/271, i.e. the 271st day of 1991, the entire day. On "page 4" you're seeing count rates in energy bands from detector 4 of 4. On "page 5" you're seeing count rates in ancillary particle detectors. At the top of both pages is a context timeline, with geomagnetic rigidity, marks for SAA times, GRB messages from BATSE (the equivalent of GBM, from the same guys at Marshall), marks for the "primary" and "secondary" sources in each orbit (OSSE viewed two targets per orbit, nominally on complementary sides of the Earth).

Look at page 5.

The SAA appears on page 5 in the CPMPR and CPMEL (Charge Particle Monitor Proton and Electron detector) time series. See that 6 of the 16 orbits each day have a significant SAA passage, with 2 relatively modest SAA passes. Note that they are marked also by the bold bars in the Rigidity time series. BTW, the CPM was a small plastic scintillator (3/4" diameter, 3/4" long) turned on at all times.

Note from the CPD$R* plots (the Charge Particle Detectors 1 through 4) that the orbital particle rate modulation is bigger on non-SAA orbits than on SAA orbits, i.e. the particles that aren't trapped in the belts (i.e. aren't in the SAA) are more strongly modulated on those orbits that don't pass through the SAA. This is an interesting consequence of a 28 deg orbit from an Eastward launch. BTW, the CPD was a plastic scintillator paddle, about 24" in diameter, over the aperture of each OSSE spectroscopy detector (4 spec detectors, so 4 plastic CPDs) used in the trigger logic as an active veto.

Note that by chance (ok, by design) I chose a day with a minor problem with the SAA boundary. See the sharp spikes in the "CPD" rates near 20 hrs. Comparing the times of those spikes with the heavy black lines in the "RIGID" plot (the heavy lines are the ground-defined duration of the SAA passes), it's apparent that the Eastern edge of the SAA needed to be extended a bit - i.e. the rates were still high at the end of the SAA pass.

Look at page 4.

Here you see the same Rigidity panel, plus a bunch of other rates associated with OSSE spectroscopy detector #4.

The first panel below the rigidity is SHIELD$R4. These are total veto count rates in the four, thick CsI shield segments that surrounded the main spectroscopy detector. The threshold was about 10 or 20 keV, as I recall.

The DTL$R4 panel shows the deadtime in the low-energy channel. I've forgotten the units, I'm embarrassed to say.

Next is the neutral particle rate (mostly utter nonsense). The OSSE detectors were NaI-CsI phoswich detectors, with the CsI(Na) acting as an active shield to the rear of the NaI(Tl), which was used as the spectroscopy detector. The detectors were fairly massive: 13" diameter, 4" thick NaI and 3" thick CsI (or maybe I have the Na and Cs thicknesses backwards, it's been a while). From the pulse shape, one can deduce the depth of interaction (or at least deduce whether the scintillation came from NaI, CsI, or a mix). Neutrons have a different pulse shape than gammas, electrons, or protons, and this channel was meant as a neutron monitor. It's heavily polluted with overspill from real gammas, so really all you were looking for here was a big, huge spike above the wandering trend. Ignore it.

The bottom 5 panels are counts per 16 sec (I believe those are the right units) in each of 5 energy bands. For example, the bottom panel ".05 C/4" is the rate of 50-100 keV photons in detector 4, counts per 16 sec. Note the large exponential decays in the bottom three bands, i.e. up to the 300 keV to 3 MeV band. This is the decay of iodine-128 activated in the SAA from the 127I in NaI and CsI. It's a beta+ emitter with a 1.5 or 2 MeV endpoint and a 30ish minute lifetime. Note that we're saved a little from this activation in GLAST by having the CAL zero suppression at 2 MeV, and by the fact that each CsI xtal is a lot smaller than the OSSE phoswich.

In the 3-10 MeV and >10 MeV bands, note the orbital modulation again. Obviously it's not the gammas that are modulated here, but the particles. You're looking here at prompt decays from passing GCRs and residual light.

Unlike in the GLAST band, in the nuclear gamma regime (say 50 keV to 10 MeV), observations are overwhelmingly dominated by local background. The instrument is a lot hotter than the sky. So to observe an astrophysical source, we pointed the collimated detectors at a source for 2 minutes, then scanned off source by a couple collimator attenuation lengths (say 5 deg), then chopped back to source, then off, etc etc etc for 9 long years. There were only a handful of sources that were visible in these 16-sec samples (Crab nebula, ~6 BH binary transients, and one or two neutron star transients, as I recall). So when we saw the 2-minute chopping in these standard plots, we knew we would detect an extremely bright source in the detailed spectroscopy analysis. The other dozens of OSSE sources were visible only in incoherent sums of source-minus-background over one or two weeks of observing.

Eric


6 Comments

  1. What's the time binning for pedestal analysis in Item 1 of 24 Jan 2007 email message?

    With periodic triggers at 2 Hz, one run ~ one orbit will have 7000 events, which is far more than necessary to calculate a good gaussian fit. On the other hand, the 10-sec binning that Anders has presented for some quantities is too short: that's only 20 events. Probably something like one minute (120 events) is a reasonable minimum time.

  2. What's the time binning for the P/M, P/p, and M/m analysis in Item 2 of 24 Jan 2007 email message?

    This is more subjective. For ground muon tests, the noise in the ratios is marginally acceptable in a 10-min muon run at ~500 Hz. On orbit, the rate of 4-range events that are accepted by the HFC is ~30 Hz (or less if we run out of downlink b/w), so it might take ~20 times longer to get marginally acceptable ratios, or about 200 minutes. Maybe the minimum accumulation time is thus about 2 orbits.

    Note though that I don't really see a need for these ratios at timescales less than 1 day.

  3. Occupancy Ratio Summary

    If a LAC threshold changes with time (or the noise of the GCFE increases), the fractional occupancy of that CDE will change. We want to detect this change and provide a simple way to identify the changed GCFE. I propose an "occupancy ratio summary" for each downlink and each day.

    The typical fractional occupancy for a given CDE will depend on its location within the LAT, so we need to define a reference fractional occupancy for each CDE. Define
    refOcc(iCDE) = (total number of non-periodic-trigger hits in CDE iCDE on reference day) / (total number of downlinked non-periodic-trigger events on reference day)

    Define the fractional occupancy for each CDE in the current downlink or day:
    occ(iCDE) = (total number of non-periodic-trigger hits in CDE iCDE in current) / (total number of downlinked non-periodic-trigger events in current)

    Then for the current downlink or day, form a 1536-bin histogram (i.e. one bin per CDE) of
    hist(iCDE) = occ(iCDE) / refOcc(iCDE)

    This should be ~1.00 for all GCFEs, of course with some statistical scatter. If a LAC threshold moves or a GCFE gets noisy, the corresponding CDE will move off of the 1.00 trend line.
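
    A minimal sketch of this summary, assuming the per-CDE hit counts and event totals have already been extracted for the reference day and for the current downlink or day; the variable names and the outlier cut are illustrative:

        # Sketch: occupancy ratio per CDE, current vs reference, 1536 bins.
        import numpy as np

        def occupancy_ratio(hits_cur, n_events_cur, hits_ref, n_events_ref):
            """hits_*: length-1536 arrays of non-periodic-trigger hits per CDE."""
            occ = np.asarray(hits_cur, float) / n_events_cur
            ref = np.asarray(hits_ref, float) / n_events_ref
            ratio = np.divide(occ, ref, out=np.full_like(occ, np.nan), where=ref > 0)
            # CDEs well away from 1.00 suggest a moved LAC threshold or a noisy
            # GCFE; the 25% cut here is arbitrary.
            suspects = np.flatnonzero(np.abs(ratio - 1.0) > 0.25)
            return ratio, suspects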

  4. Noise Ratio Summary

    If the noise of a GCFE increases or decreases (yes, decreases happen), the width of the pedestal distribution will change. Pedestal centroids and widths are measured from periodic triggers. We want to detect this change and provide a simple way to identify the changed GCFE. I propose a "noise ratio summary" for each downlink and each day. Again, the point here is to identify a change, not measure the absolute noise.

    The LEX8 and HEX8 pedestal width (say, RMS) is slightly different for each channel, and there are four GCFEs with out-of-family pedestal width at room temperature. We need to define a reference epoch (say, the pre-environmental Obs CPT for ground test, and an early day on orbit for flight), and save the pedestal width (expressed as rms of the best-fit gaussian, to avoid outliers). Only LEX8 and HEX8 are interesting.

    Then for the current downlink or day, calculate the LEX8 and HEX8 pedestal RMS (again, from a gaussian fit), and form two 3072-bin histograms (i.e. one bin per GCFE, separate histograms for LEX8 and HEX8) of
    hist(iGCFE) = rms(iGCFE) / refRms(iGCFE)

    This should be ~1.00 for all GCFEs, of course with some statistical scatter. If the noise amplitude changes, the corresponding GCFE will move off of the 1.00 trend line. The plotting code could also identify outliers by tower, layer, and column.

  5. Suspect CALLO (FLE) or CALHI (FHE) Finder

    These plots are designed to detect suspect CAL threshold settings.

    Create two 2D histograms, one for events with CALLO asserted and CALHI not asserted, and one for events with CALHI asserted (and either state of CALLO). In both plots, Periodic must not be asserted. Each histo should be 3072 bins x ~100 bins. X axis is CAL channel ID that has the highest xtal face energy (must be energy, not ADC bin). Y axis is the log of that highest xtal face energy in the tower that requested the CALLO or CALHI. For the CALLO plot, the energy binning should span 0-300 MeV (if linear binning) or 1-300 MeV (if log binning). For the CALHI plot, the energy binning should span 100 MeV to 3 GeV. A key point is that a 2D bin should be incremented tower-by-tower: for each tower that requests CALLO (or HI), populate the bin for the xtal face in_that_tower only.

    We're trying to find xtal faces that request CALLO or HI more often than their neighbors. Eventually we'll see systematic differences from the acceptance geometry, and we're looking for outliers.

    Because there may be more than one xtal face per tower that causes excess requests, this pair of plots should be replicated for the second-highest xtal face energy and the third-highest xtal face energy, for a total of 6 plots. These last four are lower priority.
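
    A minimal sketch of the highest-face CALLO histogram (the CALHI version is the same with the other trigger bit and the 100 MeV to 3 GeV span), assuming each non-periodic event has already been unpacked into one (channel ID, face energy) entry per CALLO-requesting tower; that unpacking is not shown and the structure is invented:

        # Sketch: 3072 x 100 histogram of highest xtal-face energy per
        # CALLO-requesting tower, log energy binning from 1 to 300 MeV.
        import numpy as np

        N_CHANNELS, N_EBINS = 3072, 100
        E_MIN_MEV, E_MAX_MEV = 1.0, 300.0

        def callo_finder(entries):
            """entries: iterable of (channel_id, face_energy_mev), one per
            CALLO-requesting tower per event (highest face in that tower only)."""
            histo = np.zeros((N_CHANNELS, N_EBINS))
            edges = np.logspace(np.log10(E_MIN_MEV), np.log10(E_MAX_MEV), N_EBINS + 1)
            for channel, energy in entries:
                if E_MIN_MEV <= energy < E_MAX_MEV:
                    ebin = np.searchsorted(edges, energy, side="right") - 1
                    histo[channel, ebin] += 1
            return histo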

  6. Unknown User (fewtrell)

    Cal Threshold calibration & monitoring Software Handbook.

    I am attaching a presentation which details all available calibGenCAL software for monitoring & calibrating Cal Thresholds

    Cal Threshold Calibration and Monitoring Software.ppt

    added pedestal calibration info