one correction: cut 1 in the HIP filter is of course "less than *3* hits above 50 MeV in each of the first 3 CAL layers"

Fred

On Tue, 8 Apr 2008, Frederic Piron wrote:

Hi Mark, Eric,

things are going very fast these days - see the discussion which started last Friday on the ScOps mailing list. I replied to E. Siskind to buy some time so that the CAL people can express their opinion before any decision is made on the FSW parameters. I hope you don't mind that I jumped into the discussion like this. But I definitely think we should discuss this in the CAL group a.s.a.p., don't you think?

Fred

-------------------------------------------------------------
Frederic Piron
Laboratoire de Physique Theorique et Astroparticules
UMR 5207 - CNRS/IN2P3 et Universite de Montpellier 2
CC 070 - Batiment 13, Universite de Montpellier 2
Place Eugene Bataillon, F-34095 Montpellier Cedex 5
email: piron@in2p3.fr
phone/fax: +33-4.67.14.93.04/41.90
-------------------------------------------------------------

---------- Forwarded message ----------
Date: Tue, 8 Apr 2008 13:03:04 +0200 (CEST)
From: Frederic Piron
To: "Siskind, E. J."
Cc: Bill Atwood, solist@glast2.Stanford.EDU, grove eric, strickman mark
Subject: RE: [SO] Rates from 1-Day v13r9p12 Background Run

Hi Eric,

those events you mentioned are certainly useful for CAL calibration. Recently Johann studied the event rates passing the different filters, as a preliminary study in view of adding protons and helium to the GCRcalib process. Using ~900 seconds of the big run, he also found that ~5 to 6 Hz of ions pass the GAMMA filter, while only ~2.2 Hz pass the HIP filter. Besides, only ~0.23 Hz of ions pass both filters. Please have a look at http://confluence.slac.stanford.edu/display/CAL/25+Mar+08+GCR+Meeting+Notes for the full analysis.

Our understanding of these numbers is that many ions can't pass the HIP filter because of the two following cuts:
1/ less than 2 hits above 50 MeV in each of the first 3 CAL layers;
2/ an energy matching within 20% between the energies in these first 3 layers.

In the past I showed (see, e.g., page 5 of http://confluence.slac.stanford.edu/download/attachments/36014/Piron_GCR2007-08-03.ppt?version=1) that these cuts particularly affect the heaviest ions. The events rejected by the HIP filter can pass the GAMMA filter, since the latter does not include such cuts. On the above Confluence page, Johann showed that a cut on abs(1-CalELayer0/CalELayer1)<0.3&&abs(1-CalELayer2/CalELayer1)<0.3 (i.e. a matching within 30%) reduces the ion rate from the GAMMA filter to 1.6 Hz. My guess is that cutting at 20% and adding the cut on multiplicity (as in the HIP filter) would lower this rate to the 0.23 Hz that both filters have in common.

My understanding is that these two severe cuts were put into the HIP filter to keep the event rate at a reasonable value and to comply with the downlink limitations. This decision was taken about a year ago. However, it already implies that we'll have to wait for months (even years) to collect enough statistics for calibrating the highest CAL energy ranges. As an alternative, we (at LPTA) recently started to work on those heaviest ions from the GAMMA filter, to recover some statistics. We would thus feel quite uncomfortable if the only solution considered for reducing the GAMMA filter bandwidth were to reject those events.

The CAL group meets today at 7:30 AM PT (http://confluence.slac.stanford.edu/display/CAL/08+Apr+08+GCR+Meeting+Notes) and we'll certainly discuss this issue.

Fred
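For concreteness, the two HIP-filter cuts discussed above might look like the following sketch, assuming that "matching within 20%" is the same layer-ratio test as Johann's 30% version of the cut. The function name, array arguments, and hit-count bookkeeping are hypothetical; only the tolerances and the "fewer than 3 hits above 50 MeV" condition (per the correction at the top of this thread) come from the discussion itself.

    #include <cmath>

    // Sketch (not the actual FSW code) of the two HIP-filter cuts.
    // eLayer[i]  : energy deposited in CAL layer i [MeV]
    // nHits50[i] : number of hits above 50 MeV in CAL layer i
    // matchTol   : fractional matching tolerance (0.2 for the HIP filter,
    //              0.3 for Johann's relaxed GAMMA-filter version)
    bool passHipStyleCuts(const double eLayer[3], const int nHits50[3],
                          double matchTol = 0.2)
    {
        // Cut 1: fewer than 3 hits above 50 MeV in each of the first
        // 3 CAL layers (per the correction at the top of this thread).
        for (int i = 0; i < 3; ++i)
            if (nHits50[i] >= 3) return false;

        // Cut 2: layer energies must agree within matchTol, with layer 1
        // as reference, as in abs(1 - CalELayer0/CalELayer1) < matchTol.
        if (eLayer[1] <= 0.) return false;
        if (std::fabs(1. - eLayer[0] / eLayer[1]) >= matchTol) return false;
        if (std::fabs(1. - eLayer[2] / eLayer[1]) >= matchTol) return false;

        return true;
    }

A heavy ion whose shower develops unevenly through the first layers fails the matching condition, which is consistent with the observation above that these cuts particularly affect the heaviest ions.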
On Tue, 8 Apr 2008, Siskind, E. J. wrote:

Hi Bill:

Let me just remind you that we have another potential handle on the situation - the CNO trigger. From the short sample of events that I looked at, it looked as if 7-8 Hz of triggers were CNOs (with TKR & ROI) with CAL energy > 20 GeV. These come through engine 4, and thus are read out with 4-range readout. The mean event length for those guys is 40 kilobits, so this is at least 280 kbps of the downlink budget. These events seemed to preferentially generate tile hits on at least 3 sides, in addition to the top.

To me, if you're looking to shave another 50-100 kbps off the gamma filter output bandwidth (after you've eliminated the zero-energy events), these few events seem like the richest prize. (To paraphrase Willie Sutton - that's where the bandwidth is.) Although there certainly is a component in them where the CNO assertion is due to multiple backsplash particles through a single tile, a significant fraction of them could actually be primary CNOs.

In summary, they hold the promise of being a sample which can be eliminated with minimal CPU use, with an enhanced likelihood of being background, and with a significant impact on the downlink budget.

ejs

-----Original Message-----
From: Bill Atwood [mailto:atwood@scipp.ucsc.edu]
Sent: Monday, April 07, 2008 6:28 PM
To: Siskind, E. J.; solist@glast2.Stanford.EDU
Subject: Re: [SO] Rates from 1-Day v13r9p12 Background Run

Hi Eric -

Simple answer: neither of what you posit - it both improves the efficiency and limits the PSF tail. The Pat. Rec. is a combinatoric hypothesis-generation engine followed by a Kalman hit finder/fitter as a hypothesis-validation step. Such brute-force techniques usually have limited applicability due to excessive CPU usage as the events become more complex. To compensate we limit the number of trials (usually << 100), but by doing so we must choose where to preferentially look. That's where the Cal Search business comes in. So it's not the case that the Pat. Rec. is simply failing to find candidate tracks; rather, within the limited amount of searching for the "best" track that is affordable, we've greatly increased the probability that the initial conversion tracks will be considered.

I *think* the best possibilities for improving the OBF to open up bandwidth are:

1) Unpack the ACD information - tally the ACD energy and require ACD_Total_Energy/Cal_Total_Energy < 0.008. I have done this in the v13r9p12 sample, and this cut alone reduces the incoming trigger rate of 2.7 kHz to 1 kHz. If done after the current OBF, the rate is 277 Hz (down from 420 Hz). This was one of the CPF analysis revelations in Pass 6.

2) Find the number and pattern of ACD hit tiles that signal background. I cannot investigate this, since in the simulation we didn't put a threshold (other than the zero-suppression threshold) on the tile counters - they are present but essentially useless as is. I have corrected this in the code that is part-and-parcel of the "Clean-up-Merit" project. The tile threshold becomes a real issue as the energy increases. Why? Consider what the backsplash x-ray spectrum from a high-energy gamma-ray shower looks like: it is very steeply falling, so by requiring even as little as 1/4 of a MIP, most tiles no longer count.

All this is by way of supporting your intuition that counting tiles from the veto-hit-map (threshold ~0.4-0.5 MIP), along with considering the ACD face information, could result in an improvement. I will need new AG runs & background runs with the reworked AcdValsTool to tell how much.

3) Put an energy threshold on the Calorimeter for inclusion in the downlink. This is an old story and it is being implemented - but it does seem that we will need (if not desire) more. One should note here that to the extent we can set the downlink threshold (and why not?), we can set the rate as low as we want (need), albeit at the sacrifice of the low-energy physics (think GRB time structure).

- Bill
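Bill's point 1 and Eric's CNO bandwidth estimate both reduce to a few lines. The sketch below uses the 0.008 ratio threshold and the 7 Hz x 40 kbit figures quoted above; the function and variable names are hypothetical.

    #include <cstdio>

    // Sketch of the ACD/CAL energy-ratio background cut from point 1;
    // the 0.008 threshold is the value quoted above.
    bool passAcdCalRatio(double acdTotalEnergy, double calTotalEnergy)
    {
        if (calTotalEnergy <= 0.) return false;  // zero-energy events fail outright
        return acdTotalEnergy / calTotalEnergy < 0.008;
    }

    int main()
    {
        // Back-of-the-envelope check of the CNO bandwidth estimate:
        // 7 Hz of 4-range-readout events at a mean length of 40 kilobits.
        const double cnoRateHz  = 7.0;
        const double eventKbits = 40.0;
        std::printf("CNO downlink load: at least %.0f kbps\n",
                    cnoRateHz * eventKbits);     // 280 kbps
        return 0;
    }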
----- Original Message -----
From: "Siskind, E. J."
To: "Bill Atwood" ;
Sent: Monday, April 07, 2008 2:09 PM
Subject: RE: [SO] Rates from 1-Day v13r9p12 Background Run

Hi Bill:

The salient question is whether this Cal Search method only improves the high-energy PSF, or lowers the efficiency as well. If it lowers the efficiency, then it presumably does so by rejecting some of the confused events. In other words, does this method improve the high-E PSF by adding another constraint which shrinks its width, by preferentially removing events from the tail, or by a mix of the two?

If it removes some events entirely, then the question becomes whether you can preferentially locate those removed events in the OBF, and remove them before they consume downlink bandwidth.

Cheers, ejs

-----Original Message-----
From: Bill Atwood [mailto:atwood@scipp.ucsc.edu]
Sent: Monday, April 07, 2008 5:00 PM
To: Siskind, E. J.; solist@glast2.Stanford.EDU
Subject: Re: [SO] Rates from 1-Day v13r9p12 Background Run

Hi Eric -

Thanks for the careful breakdown of how we've overconsumed our bandwidth allocation. I do want to address one concern you raise with the recon of high-energy events. We have purposely built the reconstruction to minimize confusion in such events. This is done in two places.

First, in the Pat. Rec., as the energy gets high (e.g. > ~1 GeV) we start to limit where we search for the gamma conversion. We do this by estimating both the direction and the location of the incident gamma from the calorimeter (which provides an energy centroid as well as the direction of the shower axis via a moments analysis). In the Pat. Rec. most of the tracks are found using the "Cal Search" method. By projecting the calorimeter trajectory back into the tracker, we limit which valid x,y pairs of clusters to consider as the start of the track. As the energy increases the limit becomes more severe, since the calorimeter direction and location become more precise (the Cal direction PSF is ~2 deg. 68% containment at energies > 10 GeV). Prior to doing this there was a dreadfully long tail on the PSF at high energies - exactly what I think you were worried about.

The second place where we try to limit the damage caused by "lots of hits" is in the event analysis. At high energy, the ability of the ACD to veto an event is limited essentially to the ACD tile that the first (best and longest) track points at. And even then it is only considered in play if there are fewer than 2 blank layers of silicon between the start of the track and the tile (this is the so-called SSDVeto). So in order to (self-)veto such events, not only does the tile being pointed at have to have a pulse height well above zero, the track must also start close to the outside of the tracker volume (the SSDVeto requirement).

With this said, however, it should be noted that we have not checked how many events we would lose if we gave up the high-energy bypass in the OBF. Maybe there's a way to test this with current GLEAM AG runs. I don't know how to manipulate the bits in the various summary words to do it myself without help.

- Bill
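The high-energy self-veto logic Bill describes can be rendered schematically as below. All names are illustrative, and the pulse-height threshold stands in for "well above zero"; this is a reading of the prose, not the actual recon code.

    // Sketch of the high-energy ACD self-veto: only the tile pointed at
    // by the first (best and longest) track is in play, and only if the
    // track starts within 2 blank silicon layers of it (the SSDVeto).
    bool selfVetoed(double pointedTilePulseHeight,
                    int    blankSiLayersBeforeTrackStart,
                    double tileThreshold)       // stands in for "well above zero"
    {
        const bool ssdVetoInPlay = blankSiLayersBeforeTrackStart < 2;
        const bool tileStruck    = pointedTilePulseHeight > tileThreshold;
        return ssdVetoInPlay && tileStruck;     // both required to self-veto
    }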
----- Original Message -----
From: "Siskind, E. J."
To:
Sent: Saturday, April 05, 2008 10:05 AM
Subject: RE: [SO] Rates from 1-Day v13r9p12 Background Run

Try looking at https://confluence.slac.stanford.edu/download/attachments/13899/CAmeeting_12172007_PDSmith_OnboardFilter.pdf, especially slide 3.

ejs

-----Original Message-----
From: Bloom, Elliott
Sent: Saturday, April 05, 2008 12:56 PM
To: Siskind, E. J.; 'atwood@scipp.ucsc.edu'; 'pdsmith@mps.ohio-state.edu'; 'solist@glast2.Stanford.EDU'
Subject: RE: [SO] Rates from 1-Day v13r9p12 Background Run

Hi Eric,

It would be useful to see a discriminator curve on the total energy from 15 GeV to 40 GeV.

Best, Elliott

-----Original Message-----
From: Siskind, E. J.
Sent: Friday, April 04, 2008 10:30 PM
To: Bloom, Elliott; 'atwood@scipp.ucsc.edu'; 'pdsmith@mps.ohio-state.edu'; 'solist@glast2.Stanford.EDU'
Subject: RE: [SO] Rates from 1-Day v13r9p12 Background Run

Hi Elliott:

That's certainly true, but raising that threshold from 20 GeV to infinity only actually dropped the high-E event rate by 30% of the total. The rest of the events get through - mostly because the track extrapolation through the ACD in the late stages of the gamma filter is effectively disabled once the CAL energy reaches (another arbitrary value of) 30 GeV, because of backsplash concerns.

Basically, I'm trying to encourage people to consider whether you can develop a test (one that doesn't consume much CPU time) which is more finely targeted than the rather blunt instrument of simply raising that high-E cut. Ideally, one would like something which reduces the high-E GCR and CNO contamination more than the high-E gamma signal (i.e., raising the high-E cut doesn't increase the signal-to-noise in the events passing the cut, and you lose a large number of gammas along with the background which you do eliminate by taking that route). In lieu of that, you'd like to preferentially eliminate either those events which are likely to be induced by heavy ions (e.g. by rejecting CNO events above some CAL energy threshold and with lots of ACD hits), or those events which have so much backsplash in so many places that you aren't likely to be able to pick out the primary gamma vertex and the electron-positron pair exiting that vertex, and thus reconstruct RA and Dec (e.g. by making cuts based on the number and spatial distribution of struck ACD tiles).

However, perhaps I'm overly optimistic in my expectations - I'm certainly aware that people have been thinking about the design of the gamma filter for years.

Cheers, ejs

-----Original Message-----
From: Bloom, Elliott
Sent: Saturday, April 05, 2008 1:11 AM
To: Siskind, E. J.; atwood@scipp.ucsc.edu; pdsmith@mps.ohio-state.edu; solist@glast2.Stanford.EDU
Subject: RE: [SO] Rates from 1-Day v13r9p12 Background Run

Hi Eric,

The way to lower the rate from the total energy trigger is to raise the threshold on the total gamma energy. Perhaps 25 GeV is where we need to start. 20 GeV was somewhat arbitrary, though we would like to have this threshold as low as possible.

Best, Elliott
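Eric's distinction between the blunt high-E cut and a targeted test might be pictured as follows. The struct, the tile-count requirement, and every threshold here are placeholder assumptions, not proposed FSW values.

    // Sketch contrasting the blunt high-E cut with the kind of targeted
    // heavy-ion rejection Eric suggests. All thresholds are placeholders.
    struct EventSummary {
        double calEnergyGeV;  // total CAL energy
        bool   cnoAsserted;   // CNO trigger primitive fired
        int    nStruckTiles;  // number of struck ACD tiles
    };

    // Blunt instrument: reject everything above the high-E threshold,
    // losing high-E gammas along with the background.
    bool passHighECut(const EventSummary& ev, double thresholdGeV = 25.0)
    {
        return ev.calEnergyGeV < thresholdGeV;
    }

    // Targeted alternative: reject only high-E events that also look
    // ion-induced (CNO assertion plus lots of ACD activity).
    bool passTargetedCut(const EventSummary& ev)
    {
        const bool ionLike = ev.cnoAsserted && ev.nStruckTiles >= 6;  // placeholder count
        return !(ev.calEnergyGeV > 20.0 && ionLike);
    }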
-----Original Message-----
From: Siskind, E. J. [mailto:ejs@slac.stanford.edu]
Sent: Friday, April 04, 2008 5:59 PM
To: atwood@scipp.ucsc.edu; pdsmith@mps.ohio-state.edu; solist@glast2.Stanford.EDU
Subject: RE: [SO] Rates from 1-Day v13r9p12 Background Run

Two additional points:

1) Note that the OSU results come from summing 5 seconds of low-background data and 3 seconds of high-background data. I previously mentioned that the mean of Gregg's latest numbers, without correction for the off-diagonal terms, was 1421 kbps. If I correct these numbers for the off-diagonal contributions and then take a 5:3 weighted average, I arrive at 1403 kbps - in excellent agreement with the OSU calculation (which actually looks like an integral of 1413 kbps). This also emphasizes the need for taking the proper orbit average of the background.

2) We're all well aware that there is a component in the OSU calculation of ~380 kbps to be gained by excluding the zero-energy events from the gamma filter output. My previous calculation, suggesting that we need to be down around 1000 kbps while actually in a data-collection run in order to meet the budget for the orbit-averaged total downlink bandwidth, implies that even after reaping the benefits of eliminating these zero-energy events, we're still a bit over budget - or certainly have no remaining margin. (From the second OSU plot - bottom right in slide 7 of Gregg's talk: 1413 - 379 = 1034 kbps.) It is for this reason that I suggest investigating methods for cutting additional events from the gamma filter output - hopefully without significant detriment to the acceptance for the reconstructable gamma signal. Even a 10% cut in the bandwidth from the E > 20 GeV events ought to give us at least another 50 kbps of margin to work with.

ejs

-----Original Message-----
From: Bill Atwood [mailto:atwood@scipp.ucsc.edu]
Sent: Friday, April 04, 2008 2:05 PM
To: pdsmith@mps.ohio-state.edu; solist@glast2.Stanford.EDU
Subject: [SO] Rates from 1-Day v13r9p12 Background Run

The attached ppt file shows the rates in the first 77 runs of the current 1-Day sample, with pie charts showing the composition of the rates by source type. I was indeed wrong when I said the McSourceId = 7000 events were neutrons. They are (as Patrick Smith has found) Earth10: Albedo Gammas. The flow of the rates I see is consistent with what we've had before.

So this leaves the question: why are we just finding out now that we're ~50% oversubscribed in bandwidth?

- Bill
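As a quick check of the bookkeeping in Eric's "two additional points" above (the numbers are the ones quoted there; the ~500 kbps figure in the last comment is an inference from the 10%/50 kbps statement, not a quoted value):

    #include <cstdio>

    int main()
    {
        // Point 2: subtract the zero-energy component from the OSU total.
        const double osuTotalKbps   = 1413.0;  // integral from slide 7 of Gregg's talk
        const double zeroEnergyKbps = 379.0;   // zero-energy gamma filter events
        const double remainingKbps  = osuTotalKbps - zeroEnergyKbps;  // 1034 kbps
        const double targetKbps     = 1000.0;  // budget during a data-collection run

        std::printf("After dropping zero-energy events: %.0f kbps\n", remainingKbps);
        std::printf("Margin vs. target: %.0f kbps\n", targetKbps - remainingKbps);

        // If a 10% trim of the E > 20 GeV bandwidth is worth >= 50 kbps,
        // those events must carry roughly 500 kbps of the downlink.
        return 0;
    }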