Blog

BT Meeting Minutes 7 May

Results from pedestal temperature corrections

LSR: are the twr3 systematics understood there?
AC: there is some back-tail of the energy deposition; the corrections are all negative, so that is a beam-induced bias. This run was taken with a 500 MeV beam, so the bias is 0.3 MeV, really small. And since it happens at both ends of the xtal it should not affect the position measurement. When we have the beam the pedestal is a bit different, that is clear, but it is a small correction

New sims - Johan

JB: actually run 2039, 50 GeV electron at 0 deg in twr2. LAC distribution for data, old and new MC; the peak is much closer to the data now.
3: some relevant distributions (data in black, old MC in red, new MC in blue). CalNumHit is now matching the data! As I said, in this new sim we have both realistic LAC thresholds and LPM off. For these distributions the main effect comes from the LAC thresholds. CalTransRms also benefits from the corrected LAC thresholds, while the energy distribution is not improved
3: other variables greatly improved by the corrected LAC thresholds; I did not expect that, this is very good. Lowering the LAC threshold really makes a big difference, and we are almost matching data, which is obviously very good.
5: conclusions. You can check all the other relevant merit quantities and the improvement in data-MC agreement from the BtSysTest report linked in the agenda

EB: on slide 2 the data disagree with the MC; it is better than it was, but the distribution is still quite different! I wonder if this is the cause of the residual difference in the data-MC comparison. Can we not just input the real values in the MC?

JB: maybe Sasha can comment on the broader LAC distribution in data. We decided not to do any tuning other than moving the average, the reason being that the data distribution is broader because of pedestal drift and things like that, so it is a difficult effect to implement in the MC. For now I would say it is good enough, and it is much more important to understand what the data will look like once we take into account the pedestal correction that Sasha made, which is NOT in the data I showed today

AC: Johan is right, the spread of the LAC thresholds is evidently due to T drift. In principle we can correct it; we would need an individual LAC threshold for each run and xtal, so we started by correcting with a scaling factor. It is possible to do it anyway

JB: we first want to understand the effect of the T correction on the data before doing any more tuning of the LAC thresholds in the simulation

EB: in principle you are correct, but looking at slide 3 you can see there are significant improvements

AC: what is the difference between CalTkrXtalRms and CalTransRms?

LL: CalTkrXtalRms and related variables (Trunc, TrunE ...) are similar calculations to CalTransRms, but using crystals around the track projection in the CAL, in a volume that differs between variables. If you remember my study presented last Monday at C&A, these are the variables that, when stretched, caused the highest leakage of background events through the CALRequirement prefilter in the rejection, so I am particularly pleased that these are back on track. Sasha, what is the plan to monitor the LAC thresholds on orbit? How frequently do we plan to calibrate them? Or how many different MC runs should we produce with different LAC thresholds, taking into account expected temperature variations, to make the bkg rejection cuts immune to these changes?

AC: we will have a dedicated run to calibrate the LACs close to launch, then we will generate a set of LAC thresholds close to MeV; after that we rely on the monitoring program and will have info for each single orbit. We will measure the LAC thresholds and calibrate them to any nominal value we want, so we do not need to produce different simulations; we just need to correctly calibrate and monitor them

DP: did you mention this will only work if the 2 thresholds are more or less equal? why?
AC: if difference is too big you just do not see the threshold as it never shows up
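The first-order correction Sasha describes (one scaling factor per run rather than an individual threshold per run and xtal) can be sketched as follows; all names, the nominal 2 MeV value, and the scale factor are illustrative assumptions, not taken from the actual calibration code:

```python
# Sketch of the first-order LAC threshold correction discussed above:
# a single per-run scaling factor accounting for temperature drift,
# instead of an individual threshold per run and crystal.
# All names and numbers here are illustrative, not from the real code.

def corrected_lac_thresholds(nominal_thresholds, run_scale):
    """Apply one global scaling factor to the nominal per-crystal
    LAC thresholds (in MeV)."""
    return {xtal: thr * run_scale for xtal, thr in nominal_thresholds.items()}

# hypothetical nominal thresholds, keyed by (tower, layer, column)
nominal = {(tw, ly, col): 2.0
           for tw in range(4) for ly in range(8) for col in range(12)}
run_scale = 1.03  # hypothetical factor derived from the measured T drift
corrected = corrected_lac_thresholds(nominal, run_scale)
```

A per-run, per-crystal correction would simply replace the single `run_scale` with a map keyed like `nominal`.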

BT Analysis memo

MNM: I am concerned that the recent improvements change the hadronic physics list studies
EB: good idea to do this, there should be a section on continuing issues
JB: something to be well understood is that this is not the end of the analysis; the goal is to provide the status to the collaboration
LL: true, but we should resist the temptation of writing a 100-page memo which just collects existing material (very easy to do). We need to provide complete and concise information that could go into a paper, which we need as a reference to support our strategy for calibrating the LAT with extensive use of MC. We can expand an internal memo with more details, but we should not forget we are aiming at a paper. So we will have to redo a lot (if not all) of the studies listed here, including the hadronic physics studies; not sure we can redo everything, but it is important that we reprocess data, generate updated simulations, rely on the existing BtSysTest and analysis to remake most of it, and finally make the effort to summarize for the benefit of the collaboration and for getting to a paper

EB: about the LPM effect, it is definitely something that we should see, so just disabling it is not good, and as you see from Johan's plots it really makes a big effect. Is there a way to put pressure on the G4 developers to re-implement the LPM effect correctly? Do we have any estimate on when this will be done?
JB: they work under no pressure, if you really want it, they would say just reimplement it yourself
LL: what was the story with the multiple scattering bug? did we have somebody from our side to reimplement it correctly?
LSR: for the multiple scattering we went back to an old working implementation; I do not think this is possible for the LPM, we have no evidence that it ever worked. Johan, there are several effects in the LPM, do you know if the G4 people plan to work on all of them?
JB: please send me an email with details and I will fwd it to Vladimir to check about his plans

Unedited notes, corrections and additions are welcome, Luca (LL)

News

LPM effect
LSR: the LPM effect exists and we should see it; turning it off should not be the solution.
The thin-target effect, again largest for high-Z material, also suppresses brem; we should not consider this solved until we get a correct implementation
JB: you are right, I never said the problem is solved; we just know the current implementation is not correct, and the G4 developers advised us to do so. We already saw we need some LPM effect, starting from 100 GeV or so. We will have to work with our data and the G4 team to implement this correctly
BG: turning LPM off is like having an upper limit, right?
JB: yes
BG: whatever we get when we switch it on will be less than what we have with LPM off?
JB: something that Philippe brought up a few times is the connection between hits agreement and extra material

Temperature correction for CAL pedestal
see the posted summary from Sasha
BG: any chance to get the pedestal correction for some proton runs where I saw a drift of the MIP peak?
AC: you can apply it to any run and the pedestal will be correct, except that I do not correct for the rate effect
BG: I took care of that, picked the low-rate runs and cut on GemDeltaEvtTime
PB: the idea is to get a stable BT release, and then reprocess all the data

LL: Johan and Francesco will take care of the reprocessing?
JB: yes, we can do it on the pipeline or locally
LL: either way, we should advertise the reprocessed data to get more eyes on it

AC: I just noticed some strange things related to the T measurement, namely the time; it looks like the time in the start-time column in the run db does not always match the time in the logbook. A 9-hour shift appeared and then disappeared, around August 9

JB: what happened, I think, is that we changed the time of the BT server where the HKP logbook was residing; we changed it from SLAC time to European time at the users' request.

AC: another thing we noticed: there was some strange change in T, like an abrupt step up and down of about 1 deg, definitely not real cooling and heating; it happened in 5 minutes, so it is too fast. It seems to be correlated with power up and down; apparently even if the CU is powered down, the temperature is still measured

CS: the T monitors are on the cables, and can be read out with the FE off
AC: but it changes immediately, it cannot be heating from the FE
CS: yes, we observed that during the TVAC tests; there is some sort of xtalk that changes the T value when switching the FE on or off

PB: what are the 4 columns in the T file?
AC: 4 sensors
PB: September 5 gives no data for two towers
AC: it looks like tower 2 or 3 gave no T for that day
JB: for some time we ran with a special config with just 1 TEM on, I think that was a CPT
AC: anyway, it seems that for all periods when we took data we have T available

Data reprocessing with EM hypothesis

2: black is behind red
3: the only differences are in the Kalman filter related variables
4: some more plots
5: conclusions. we should use the same hypothesis in data and MC

PB: we should use the EM hypothesis for gamma and electron runs, not for hadron runs
CS: right, I think we have the same discrepancy (MC flag and data flag) for the electron runs; I did not check the hadron runs
PB: we need to have the same hypothesis for MC and data
JB: we always used MIP for data and EM for MC
LSR: just to remind people of what this is: for the MIP hypothesis there is a dE/dx calculation, mainly coming in for the Kalman energy estimate; for the EM hypothesis, we assume the energy deposition falls off along the layers exponentially in radiation lengths. So for the same track, the Kalman energy will be higher for the e-radloss hypothesis. This affects the fit itself, which uses this info to partition the energy between the vertex tracks, so you should see some 2nd-order effect from that
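Leon's description of the two track energy hypotheses can be sketched numerically. All parameters below (deposit scale, radiation lengths per plane, slope) are illustrative assumptions, not the values used in the actual tracking code:

```python
import math

def mip_expected_deposits(n_planes, de_dx=0.2):
    """MIP hypothesis: a roughly constant dE/dx deposit per plane
    (illustrative value, arbitrary units)."""
    return [de_dx] * n_planes

def em_expected_deposits(n_planes, first_plane, rad_len_per_plane, slope):
    """EM (e-radloss) hypothesis as described above: the expected deposit
    falls off exponentially with depth measured in radiation lengths.
    All parameters are illustrative, not the real tracking-code values."""
    return [first_plane * math.exp(-slope * i * rad_len_per_plane)
            for i in range(n_planes)]

mip = mip_expected_deposits(8)
em = em_expected_deposits(8, first_plane=0.2, rad_len_per_plane=0.3, slope=1.0)
# Under the EM hypothesis the expected deposit decreases plane by plane
# rather than staying flat, which is why the same observed track maps
# to a different (higher) Kalman energy estimate than under MIP.
```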

News

Sasha (AC): just a comment on those slides: the CsI density of 4.51 is correct, as we found out later, so do not change that. Aous is looking at T for those runs using the pedestal drift, and he got a rather high T, around 29-30, so the real T would be really interesting to look at; if T is that high there will be some effect. At the end of SPS we had a similar T, so if we made the muon calib at the same T, the light yield would be the same, while since the PS runs were mostly taken at lower T (22), we would see differences for those runs, although in the wrong direction, i.e. we should have a lower signal. So it will not compensate for all our discrepancy, and possibly goes in the other direction, so it is important that we get this information

Low E simulations - Carmelo (CS)

2: these runs were simulated with both std and LE physics. We observe no big effect below 10 GeV, as expected, but we find a surprising increase in the number of hits, which gets closer to the data as energy increases. I have put all reports here http://www.pi.infn.it/~sgro/reports_EleLowEnergy_16_4_08.zip

3: left is tower not hit by the beam, right is tower hit by the beam; top is cluster, bottom is hits. interesting to notice that red curve for cluster matches better data (black)

4: plotted the ratio and the hit and cluster profile, again 5GeV gives no big change, 50 gives better agreement for LE physics

5: some cluster variables from the merit, same behaviour

6: some CAL variables show no difference between std and LE physics

Leon (LSR): very interesting; with physics lists that emphasize LE behaviour, it might make sense to test with detector models that have shorter range cuts and finer granularity in the geometry
CS: we have used std cuts
Luca (LL): what do you mean by finer geometry?
LSR: we made tests with realistic honeycomb geometry and glue dots which showed no changes, it would be interesting to test LE physics with such models
CS: the honeycomb tests were done at 1 and 10 GeV; away from those energies, it would be good to redo those tests

Philippe (PB): we know, at least for the energy discrepancy, that we will need to add some material in front of the CU. I showed some time ago that if you add 10% X0 the cluster number gets into agreement with data, so at least for 50 GeV, where you have a perfect match, you will lose agreement with extra material - we need to keep this in mind (see https://confluence.slac.stanford.edu/download/attachments/13893/beamtestmeeting_20071107.pdf?version=1)

Johan (JB): I do not remember that, but at some point there was a plot suggesting that at SPS we might have too much material in the beam line, which was provoking some spread of the beam with the wrong divergence, so we do not know yet if and how much material we have to add for the SPS simulation. For the cuts, we use 10 um, as usual, and have shown many times that going below this does not make any change. I am currently in touch with Vladimir Ivantchenko at CERN, and he is suggesting some tests with configurable parameters to understand where these changes come from. I plan to make these tests in the CU standalone tower simulation; at that point it will make sense to test different geometries as Leon suggests

David (DP): I had found no difference when changing production cuts, although I did not check with LE physics (see here https://confluence.slac.stanford.edu/download/attachments/4096462/Check_ProdTh_Geant4_2008_02_06.pdf?version=1). Did you check what happens with runs with energy>50GeV?
CS: yes, you can find plots for all those runs in the link I pasted in the chat window, I just selected two representative runs
DP: is the agreement that you find good at high energy? is there a different behaviour?
CS: agreement is good at any energy>20GeV; there is something strange in the sim at the highest energy, slightly worse than 50GeV, but anyway better than std physics

Only suggestions and discussion items are in these notes; I did not take notes in real time

New runs with realistic TKR noise occupancy (10 times lower than earlier sims)

  • old simulations give more hits than data, contrary to Nicola's previous analysis - check cuts and software versions
  • compare with Leon's studies on threshold changes - he did not do systematic studies with different occupancies but only changed the strip threshold

Update on CalLkHdEnergy - Yvonne

PB: you should use CTBCORE>0.1, we always use that cut in the analysis to ensure reasonable reconstruction
BG: distributions are asymmetric, you should use the peak instead of the average value
EB: add some delta beam for GR and check if we see the same effect
PB: no reason to expect CTBBestEnergy to behave the same for the CU and the LAT; for example slide 17 would not look the same for the LAT
PB: we should probably limit the CT selection close to the boundary of CalLkHdEnergy definition
EB: for IRFs, pay attention that CTB smears selection, very important for DM. Maybe reconsider the strategy for determining the best energy?

Temperature effects on light yield - Sasha

PB: is temperature effect correction in the code? do we plan to make it before launch?
AC: T should be fairly stable in orbit, so there is no real need to correct, but I can put this correction into the CU analysis within a week for the light yield; I need to know the temperature for the runs taken in Pisa during the muon gain calibration
LL: will make sure you get them, sorry that you do not have them yet

BT paper discussion - Summary

We will write a note for the internal use of the collaboration and reconsider publishing a paper from that note depending on the results; we will include the latest changes from the T correction but most likely not the new CU calibration, which will likely happen after launch.
Luca and Philippe will do the drafting and ask people to contribute and proof-read it. Elliott volunteered to proofread the draft. This should not change our focus on constraining uncertainties and improving data-MC agreement and the analysis in general, even if we decide to publish a short paper out of the note

Steps towards BT paper

With launch approaching, we should consider our strategy for publishing the BT results before all our efforts are dispersed.
I believe we should identify the path towards a new assessment of the CAL discrepancies (energy scale, caltransrms, longitudinal position measurement) - below is a first list to discuss:

  • analysis with updated pedestal files - Sasha
  • new CU calibration - SLAC
  • reprocessing of key runs using the above updates - Johan, Franz
  • re-evaluation of discrepancies - all

When this is done, depending on the residual discrepancies, we should consider the following options:

  1. residual discrepancies at the few-% level: we could publish a short paper with the current status and the basic data-MC comparison plots; to minimize the effort, I would discuss only this and refer to the GLAST symposium paper for the description of the setup and dataset. Pro: we release the pressure on us for delivering our analysis to the collaboration, and we have a reference paper that ensures our MC-based performance parameterization is grossly under control; con: we would presumably prefer to publish better results after such a big effort, and we may not find the energy to finalize the analysis to the sub-% level
  2. no significant improvement in the agreement: continue analyzing and delay publication until we have good agreement. Pro: we look for a final solution; con: we have no clear idea of when this is possible, and we will have less and less time/people to work on this
  3. no significant improvement in the agreement: publish a short note where we honestly state the status of our analysis (8-10% discrepancy), conclude that we are currently dominated by uncertainties in the CU calibration, and say that we are working on that. This would require at least, in my opinion, that we prepare two very clear and well-motivated statements: i) explain why we believe the CAL calibration on orbit will be better than the CU calibration with a beam, and ii) provide some initial indication that the discrepancies are not so critical for bkg rejection, through the study of stretched-variables datasets

We should discuss this during the meeting and provide feedback to the publication board on what the group thinks.

BT January Newsletter

Beam Test January NewsLetter contribution

Simulation highlights

  • BT simulations were successfully ported to the Pipeline II environment by Johan; this ensures a consistent platform with other LAT simulations for data generation, archival and storage, which will allow efficient use of the BT data well after launch. A number of test simulations are now available from the data catalog, and any new simulation will be generated using Pipeline II.
  • the H4 line simulation was double-checked and updated by Francesco to extend the simulation up to the last bending magnet in the line. No significant difference was found in the energy distribution
  • a comprehensive comparison with EGS5 from David showed very good agreement with Geant4
  • scaling variables: we started looking into ways for scaling the MC data to match our data; Johan explored the consequences of changing CAL calibration constants, Luca scaled the ntuple variables and started looking at consequences of the scaling in the event classification, Carmelo showed the consequences of changing the incoming beam energy by the measured discrepancy. We will have further discussions on this topic in the CA-SO workshop in Bari, and plan to agree on a way to implement this

Analysis highlights

  • bad position measurements in the CAL were studied by Philippe and Sasha; a possible explanation for these could be event pile-up, but it requires some more analysis to be confirmed
  • TKR cuts are being revised in the light of the new variables introduced for the neutral energy events
  • LAC thresholds behave differently in data and simulation: potentially due to pedestal drift; the issue is under investigation

BT December Newsletter

The status of the beam test analysis was summarized at the last collaboration meeting, in the session devoted to illustrating all the sources of systematics that can impact our science analysis. The focus was on clarifying which results are complete and solid and which open issues we still have to address. It is clear that the current difference in the calorimeter variables (see below) is a concern and calls for a final effort to solve it. In parallel, we should turn that into a quantitative statement on the resulting systematic error on the energy measurement, and a similar approach should be pursued for the TKR variables.
Below are some details of the recent progress.

Simulation status: we are moving to the v71215p0 release, with the following major changes

  • pass5 variables from the current event classification analysis are added to the Merit tuple and replace the old pass4 variables; this allows a subsystem-specific data-MC comparison for the variables relevant for background rejection
  • vertex variables in the tuples are modified to also describe neutral energy events; this requires a thorough check of the usual TKR cuts used so far in the TKR analysis; in particular Tkr1 variables should replace Vtx variables for the analysis of all charged tracks
  • list of golden runs: we are adding some new reference runs after discovering that many of the runs we used show a weird structure in the TkrYdir_vs_TkrY correlation plot, as a consequence of a large dead area in plane 35 of Tower 2 that the beam crossed, and of noisy strips around that area
  • a pressure scan in the cerenkov detectors along the SPS H4 line was performed as a quick way to simulate effects of different material along the line
  • the high energy SPS runs (E>5GeV) were re-simulated with the correct pressure (0.1bar) and gas (He) in the cerenkov detectors along the H4 line
  • along with moving to v71215p0, synchronized with GR v12r15, the Beamtest MC software is going to be updated to run on Pipeline II, in particular to use the new standard ways to store data on xrootd and access them via the data catalog.

Analysis highlights:

  • geant4-EGS5 comparison: a first check was documented by David Paneque, and shows a reassuring agreement for both the longitudinal and transverse EM shower shape, at least for a simple CAL geometry with no gaps
  • however, we heard that BaBar had seen similar EM shower shape issues between their beam test data and their GEANT4 MC
  • studies with extra material along the beamline : the average energy overestimation between data and MC is about 7%, but it is higher for the first layers of the calorimeter than for the last; this seems to indicate that the showers start earlier in the data than in the MC, tracing extra material along the beamline that is not described in the simulation. We have tested this idea by simulating different pressures in the Cherenkov and looking for the amount of extra-material that could account for the current disagreement between data and MC. When considering most of the SPS configurations, we find that 10% of radiation length would help, though it doesn't solve the discrepancy for all configurations and only 5% of radiation length seems necessary for the tracker variables
  • a preliminary comparison of the pass5 variables performed within Insightful Miner for a specific run shows that these are fairly well reproduced in the MC, except for the calorimeter related variables. This is certainly due to the transverse size of the showers which is larger in the data than in the MC.
  • the PSF with tagged photons was recomputed and shown to agree with the full bremsstrahlung analysis when using the same cuts
  • TKR hits and cluster summary: a different behaviour of photons wrt electrons was observed after the simulation/reprocessing with the current BTRelease (v7r11117p1); while electrons show the usual lower number of hits and clusters in the MC, photons now have a larger number of hits; the analysis must be repeated with updated cuts. Scans along X and Y show a negligible dependence of the discrepancy on position
  • high energy electrons: an initial stab at modifying CAL variables to match the data was tested in the context of an event classification study to tag high energy electrons; it was shown that the effect of the current discrepancy, mostly from the CalTransRms variable, on the algorithm that separates electrons from hadrons is non-negligible

Philippe Bruel
Luca Latronico

BT Meeting minutes

I did not take detailed minutes as I normally do; I just had time to write some comments that I think are worth keeping for the record

  • Event selection cuts:
    • Bill warns that the vertex selection should not be used blindly, in particular if we are in sync with the latest GR versions where we added the neutral energy event variables, which effectively double the number of vertices. His suggestion is to look at the vertex status bits and verify that the cuts we use do what we think
  • Simulation status
    • Luca suggests changing the benchmark run for producing different MC flavors, moving away from 2082, which was shown to have a possible issue in the beam phase-space plots
  • Collimator simulation
    • Bill commented that an actual collimator is much heavier than what was simulated and shown by Johan; he requested a thorough check of the beam line simulation
    • Francesco will provide documentation on the current simulation which indeed includes all the material along the beamline, but does not simulate the actual beamline in its full length; we will consider such a simulation
  • electron/hadron separation
    • Francesco presented a study to separate these two classes of events based on a max likelihood method
    • Alex requested details of the likelihood calculation, as he thinks the method is not applicable since you have to know the particle energies a priori
    • Francesco and Nicola clarified that the purpose of the study is to make a data-MC agreement study, not to identify electrons in flight
  • 2082 beam spot issue
    • Nicola suggests it is due to a weird combination of noisy strips in layers 31 and 35
    • Leon thinks we should keep that in mind, as we treat noisy strips with different thresholds in the DAQ and in the offline reconstruction software
BT Sim Status 28 nov 07

Simulation Status

  • Pass5 variables in BTRelease tag v7r1215p0, in sync with GR v12r15, with the following exceptions:
    • G4Generator v5r19p0 and G4HadronSim v1r2p0, Ion Physics for QGSP_BERT
    • BR continues to use AcdUtil v1r4p1, AcdDigi v1r20, and AcdRecon v3r7p2 (tags of GR v11r17), because with newer tags the code terminates in AcdUtil::AcdTileDim().
    • xmlGeoDbs v1r47p1, air instead of the virtual calorimeter in bay 0
    • In the release manager, everything compiles, but several test programs (more than usual) core dump. Probably an incompatibility between the older AcdXxx tags and the new xmlGeoDbs tag due to the changed ACD tile geometry for the LAT model! However, BR seems to run. ldf2digi and readigi_runrecon on one PS run (700001460) and one SPS run (700002082) reproduced the previous root files. But still, use with care!
  • New simulations for special configuration of run 2082
  • Dead strips are in MC, see for example run 2082 in Tower 2 that has a large dead area on plane 31
  • SPS veto counter simulation bug:
BT Work plans

Meeting with Bill, Steve, Aous, Riccardo, Eric C, Philippe, Luca, Johan, Leon, Robert Johnson during the collaboration meeting to discuss BT and background rejection

  • Effects of BT discrepancies on background rejection:
    • Michael et al are working on a new release to include pass5 variables into BTRelease, fixing some issues with tagger variables as well; now working on issues from synchronization with GR
    • the plan is to check CTBTKR/CAL/GamProbs in the CU and go back to more raw variables in case of differences; prefilter cuts should be first checked; CTBCPFGamProb is not applicable to CU geometry given the few ACD tiles available
    • Riccardo is documenting prefilter cuts and relative variable importance for pass5 and will maintain a page for further passes - work in progress
    • several people expressed interest in studying a specific subsystem's set of variables, which will eventually help shape a core team of background rejection analysts - more are welcome
    • open discussion on which tool should be used for the classification analysis (IM, RForest, orange, ROOT); Bill strongly suggests IM, at least for the softer learning curve and easier comparison with his analysis

Some suggestions from Bill and Steve discussed yesterday

  • Background rejection variables
    • Bill looked at runs 2082 (20GeV e, 0 deg) and 1445 (full-brem gammas) and ran his CT analysis for data and MC; preliminary indications give a small effect on CTBTKRGamProb, a 7% difference on CTBCALGamProb but a negligible effect on the final event classification, i.e. CTBClassLevel; this is encouraging
  • Tkr variables
    • Bill wants to independently double-check the beam cleanliness by hand-scanning some events; we discussed the possibility of providing such a sample set of events along with systemtest or through the pipeline (Tony should be able to run WIRED on the web from user requests on specific runs and numbers of events)
  • Energy scale discrepancy
    • Bill is reassured by the Geant4-EGS5 comparison; he believes we have a miscalibration somewhere, either in the beamline settings or in the CAL calibration
    • Steve is looking into the CAL calibration procedure with help from Sasha and Philippe
    • Bill requested an evaluation of the overall uncertainty on the absolute CAL calibration from the uncertainties in the various steps of the calibration
    • Philippe will finalize his analysis of the calibration factors to be used for i) scaling CAL variables in the data-like simulation, and ii) recalibrating the LAT CAL if needed

BT Meeting, november 7

Unedited notes taken during the meeting, corrections and additions are welcome LL

News

PB: official launch date is end of May, see here

Pressure scan analysis - Philippe

1: I was working on updated runs. I used this scan as a scan in extra X0, not with the aim of determining the correct pressure, which we know from the records. The current sim is not exactly what we had at SPS; the difference should be negligible anyway
2: 4 scenarios considered in the analysis, increasing complexity
3: scenario 1, one plot per configuration, differences between data and MC for each layer; the fitted pressure is written in the plot, 2 bar. Note that at 0 and 20 deg we have a behaviour which decreases with layer, while it is the opposite at 10 and 30 deg (not always obvious, but there is a trend). A negative slope would indicate more material in data wrt MC; a positive slope is the opposite
4: scenarios 1 and 4 (in blue). Obviously the chi-sq is much better, and the max discrepancy is about 4% and is quite flat. BUT
5: plotting the pressure for different angles you see a similar structure for different energies (left plot); the right plot is flatter vs energy, so somehow there is an indication that scenario 3 (pressure per angle) is favoured wrt scenario 2 (pressure per energy)
6: scenario 3.
7: tkrtotalhits from the merit (in fact clusters); the horizontal red line is data, blue vertical is the best-fit result for tkrtotalhits, red vertical is the fit result from scenario 3
8: same as 7 but for tkr1corehc; the best fit (blue line) is now 0.3 bar. Note at 10 GeV, 20 deg, the data is always below the scan points, so no hope of finding a scale factor and a pressure that minimizes the discrepancy
9: fitting caltransrms, we are in trouble here; adding extra material will not solve the caltransrms discrepancy
10: since tkrtotalhits requires 1 bar, I looked at the cal layer energy in scenario 1 using 1 bar, and you can see we have an unsatisfactory situation
11: an extra-material fit for tkrtotalhits would indicate an extra 0.05 X0, not crazy, but the layer energies would require a pressure (i.e. material) per angle. All best-fit results for the cal layer energies are compatible with the tkr best fit, but there is no coherent solution. Still have to look at the 200, 280 GeV data. And caltransrms is anyway out
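The per-configuration fits above amount to a chi-square scan over candidate pressures against the per-layer data-MC energy differences. A minimal sketch with synthetic numbers (the function names, layer values, and uniform uncertainty are illustrative assumptions, not Philippe's actual fit):

```python
def chi2(data_layers, mc_layers, sigma=1.0):
    """Chi-square between per-layer energies in data and in the MC
    generated at one candidate pressure (uniform sigma is a
    simplifying assumption for this sketch)."""
    return sum(((d - m) / sigma) ** 2 for d, m in zip(data_layers, mc_layers))

def best_pressure(data_layers, mc_by_pressure):
    """Pick the candidate pressure whose simulated layer energies
    best match the data; mc_by_pressure maps pressure -> layer list."""
    return min(mc_by_pressure, key=lambda p: chi2(data_layers, mc_by_pressure[p]))

# synthetic example: MC generated at three pressures, data closest to 2 bar
mc_by_pressure = {
    0.0: [10.0, 8.0, 6.0, 4.0],
    1.0: [10.5, 8.4, 6.2, 4.1],
    2.0: [11.0, 8.8, 6.4, 4.2],
}
data = [11.1, 8.7, 6.5, 4.2]
print(best_pressure(data, mc_by_pressure))  # -> 2.0
```

The same scan can be run per energy (scenario 2) or per angle (scenario 3) by restricting which configurations enter the chi-square.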

NM: in slide 10 there is a problem on layer 3 at 20 and 50 gev at 10 deg, did you check that?
PB: checked that, not an issue with fitting
NM: any problem in data or MC?
PB: done that check, no pb there
NM: 10 deg is still in twr2, like 0 deg. anyway conclusion should be 2 bar for cal layers?
PB: not really, more complicated than that. 5% extra x0 before the CU gets agreement on tkrtotalhits; for the energy you need something in between the tkr and the cal in a bizarre way, which is dependent on the angle. any grid geometry we can think of?
AC: on slide 1 you state that in reality we had 0 bar for E>20GeV, and 1 bar below. why are you trying to determine the pressure if you know that?
PB: it was the easiest way to check extra material along beam line, indeed we know the pressure in the cerenkov

BL: i would have expected cms people to know well what is in the beam line, but they do not seem to be so sure
NM: as you know there are scintillators that you can move in and out of the beam; 0.05 x0 is one 2 cm scintillator, so that is reasonable; more than that would be hard to believe
BL: yes, we knew that and did our best to remove whatever crap was along the beamline. we may check now with the beam line coordinator what extra material was there. another degree of freedom is the absolute value of the energy, not guaranteed at the 1-2% level, although the beam energy is reproducible
NM: any beam line logbook available to check history of material along beam line?
PB: it used to be available online, i don't know if that is still the case

PB: for each energy and angle we have 3-4 impact points, and I am looking at different pictures right now; we could for instance scan the material between the 2 towers. maybe there is something between the towers that we can hit at 10 and 30 deg and not at 0 and 20?
BL: maybe a different contribution from direct diode deposition
LSR: whatever it is, at 10 deg there is a factor of 5 in costheta
PB: we do not hit exactly the same position, so it should not go with 1/costheta if it is something we hit at a given angle
AC: how can we hit diode in the center of towers?
PB: i was not speaking about diode hit, i am thinking about material between the 2 trackers
LSR: beam has some dimension, if these were very small they could have an effect

BL: are all the effects of beam size and divergence included?
PB: yes, johan used our best knowledge of that. also i have some fiducial cuts

FL: any news from calice people? do they confirm the extra 0.1X0 and where would that be?
PB: apparently this 0.1X0 was the last number we got from calice
BG: i did not hear from them, the other guy in my lab was aware but did not know exactly the details

Modifying CAL variables for a data-like simulation

LB: we started a study for the selection of HE electrons and for assessing systematic effects in that case. we put together this machinery for scaling and shifting the variables, to modify the MC simulations so that they better reproduce BT data for some selected electron and hadron runs. the message is that the machinery is in place

LL: discussed this idea many times, this is a first attempt, we will discuss this in our F2F meeting at NRL

PB: had this discussion several times before. there are some things we can change; the best would be to change something in the simulation and get agreement. i would prefer to do that at the digi or recon level so that all final variables are automatically changed. for others, like caltransrms, we will have to change at the tuple level. the problem for me is that we should not forget about the many vars we have just started to look at

Had to leave the meeting after this discussion LL

BT Meeting 24 october 2007

Unedited notes taken during the meeting, correction and additions welcome LL

Participants: Luca Latronico, Philippe Bruel, Johan Bregeon, Taka Tanaka, Berrie Giebels, Nicola Mazziotta, Monica Brigida, David Paneque, Elliott Bloom, Claudia Monte, Francesco Longo, Bijan Berenji, Piergiorgio Fusco

news

Data reprocess

somehow good and bad news together. Good: the new BTRelease is in the pipeline. we had to face different problems, like python versions to talk to cal db, it took some days to start. Bad: we forgot to change the calibration flag in the JO, which are different for the new BTRelease, so the reprocessed runs that we had produced need to be redone from scratch. i sent an email to the beamlist but it did not work.

NM: I would like to add the Full-Brem run at 30 deg to the golden list, since we always use it
JB: just send the run numbers

Simulation status

can check the link, my slides give a summary
2: we now have a large electron scan with new BTRelease. residual problems on beam spots (shown by nicola last week), which I am fixing.
run 2082 was generated in several different configurations listed here. Good samples to look at, I will show something.
I also did several runs to get a cerenkov pressure scan. In the last runs only we had the right pressure corresponding to the real data runs; will show something done by Carmelo, and Philippe will as well
3: FB runs: we have 3 so far, that is why nicola asks for more. Run 1445 simulated in different configurations
4: tagged photons. i do not remember if we talked about this last time, but the important change is that we now have a realistic tagger spectrum which we obtained by changing the current in the MC. hadron runs are missing, I will move to them next
5: some comparisons. Alignment effect on TKR variables for 20GeV e at 0 deg - Leon's fix to the alignment is effective and there are no extra hits added by the fixed alignment procedure
6: bari digitization effects on tkr variables for run 2082. there is some effect on ToT, but it requires more investigations to decide if we really need that digi algorithm or not
7: cal and tkr vars between two MCs, the std GLAST physics MC (black) and the LE physics MC (red); it seems we have now solved the handling issue of LE physics, and see no major differences, only less than 1% in the energy scale. There is some effect on the TKR hits and clusters; we need some investigation here to decide

DP: can you quantify how much time you need to run LE physics wrt GLAST physics? a factor 2-3 or 10?
JB: I think michael kuss quantified that in less than a factor 2
DP: you ran the std MC with std range cuts?
JB: yes
DP: do we plan to run with LE physics always?
JB: we need to identify some runs to check and decide
BG: how does LE physics get rid of HE evts in the tails?
JB: I did not say that, I just said that there is a minor shift of about 1%, whereas before we had a shift toward higher energies, but that was a bug which we fixed
BG: does this package change interactions of MIPs as well?
JB: no, it is implemented only for electrons

SPS cerenkov pressure scan results

Johan (for Carmelo)

Now that we have a new MC with a corrected cerenkov pressure, we wanted to reproduce several runs with different pressure, and made them.
2: scanned from 0 to 10 bar for run 2082 (non-realistic). interesting region is up to 2 bar where the pressure can be. there is clearly a strong effect in the TKR hits and clusters, almost a linear correlation up to 10 bars
3: cal variables: the impact is much less important wrt the tracker variables. the 0-2 bar scan only changed about 100MeV, and data are still very far
4: energy profile for these runs; you see that in the 0-2 bar range the change is minimal; when you go to greater pressure, you start to see the shift of the whole shower and see the shower starting earlier - again data are way off.

BG: what is the equivalent X0 for 10 bar?
JB: do not know
PB: 1 bar is about 0.1 X0 (see PDG)
BG: so 10 bar would be 1 X0, which scales with the plot
PB: right, clearly not possible
DP: slide 2 is essentially saying that different pressure cannot be the reason for the difference data-MC, as the 'right' pressure would be different for hits and clusters
PB: do not agree; TkrTotalHits from SVAC is the number of hits, while Tkr1CoreHC is from merit - you need more variables to say that
JB: the point in checking Tkr1CoreHC is that Bill uses it extensively. 1 bar is the real value from the logbook
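The pressure-to-radiation-length conversion in the exchange above is simple linear scaling at fixed gas column length; the ~0.1 X0 per bar figure is the one quoted in the meeting and should be treated as an assumption:

```python
# Radiation length seen by the beam in the cerenkov gas scales linearly
# with pressure (fixed column length and temperature). The ~0.1 X0/bar
# figure is the one quoted in the discussion, taken here as an assumption.
X0_PER_BAR = 0.1

def extra_x0(pressure_bar):
    """Extra radiation lengths for a given cerenkov pressure in bar."""
    return X0_PER_BAR * pressure_bar

# 0-2 bar (the physically plausible range) is at most 0.2 X0;
# 10 bar would be a full radiation length, clearly excluded.
print(extra_x0(1.0), extra_x0(10.0))
```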

Philippe

2: first looked at CAL energies. we tried long ago to add extra material along the beamline, but since the ratio was never above 1 we concluded it was not the right way. Recently the issue was retriggered by the Calice people at the lab here, who told Berrie they needed 0.1X0 in the H6 line simulation to agree with the data they took in the same period as our run.
BG: they not only told me that, but they also had to set the g4 cut at 1% of active material thickness
DP: you mean distance cut
BG: yes, distance cut had to be 1% of whatever active material the beam crossed
NM: what was the G4 version they used?
BG: i can ask
NM: so the extra material should be very far away, since H6 is the same line split very far away
PB: they did not try to explain the origin of this, they just added and it works now. there is a relation between H4 and H6

3-4: layer deposits for layers 0 and 7; the difference is due to position wrt shower max
5: tried to minimize discrepancy over all layers using 2 parameters, an overall scale and the pressure
6: for 50 GeV: each row is an angle, each column is a layer. you can see the max of the shower where the dependence on pressure is null. the red point is the pressure coming out of the fit
7: all configurations - best fit would require 2.25 bar and 0.93 scaling factor
8: residuals after applying the scale factor and the pressure. you can see that a solution that is good for 20GeV (left plot) does not work for 100GeV (3rd plot). For 0 deg data tend to be above MC, for 30 deg it is the contrary, so this is incompatible with extra X0 along beam line and extra X0 in CU. I tried with extra X0 in the CU and the fit results in a negative x0
9: residuals for layer vs configuration - rules out a single scale factor
10: tried an energy dependent scale factor, but it does not help.
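The two-parameter minimization described on slide 5 (an overall energy scale plus the cerenkov pressure) could look roughly like the sketch below. All numbers are made up; the rows of `mc_layers` stand in for the simulated pressure-scan configurations, and a plain grid search replaces whatever fitter was actually used:

```python
import numpy as np

# Hypothetical MC layer energies (MeV) simulated at a few cerenkov
# pressures; rows = scanned pressures, columns = CAL layers.
scan_pressures = np.array([0.0, 1.0, 2.0, 4.0])
mc_layers = np.array([
    [900., 2100., 2900., 2600., 1700., 900., 450., 200.],
    [940., 2150., 2890., 2560., 1660., 880., 440., 195.],
    [980., 2200., 2880., 2520., 1620., 860., 430., 190.],
    [1060., 2300., 2860., 2440., 1540., 820., 410., 180.],
])
data_layers = np.array([905., 2080., 2700., 2420., 1580., 840., 420., 185.])

def chi2(scale, pressure):
    # Interpolate each layer's MC prediction to the trial pressure,
    # apply an overall energy scale, and sum squared residuals.
    mc = np.array([np.interp(pressure, scan_pressures, mc_layers[:, k])
                   for k in range(mc_layers.shape[1])])
    return np.sum((data_layers - scale * mc) ** 2)

# Brute-force grid search over the two parameters.
scales = np.linspace(0.90, 1.00, 101)
pressures = np.linspace(0.0, 4.0, 81)
grid = np.array([[chi2(s, p) for p in pressures] for s in scales])
i, j = np.unravel_index(grid.argmin(), grid.shape)
print("best scale %.3f, best pressure %.2f bar" % (scales[i], pressures[j]))
```

The point of the residual plots on slides 8-10 is precisely that no single (scale, pressure) pair found this way works for all energies and angles at once.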

the second link is a plot for tkrtotalhits, the avg value for all configurations (i.e. pressures): the red horizontal line is data, the red vertical line is the best fit result, and it is 1 bar for all configurations; there data and MC match quite well

JB: i guess an extra x0 very far away from the CU would help, w/o changing the tkr hits or clusters so much - can you verify with the calice people?
NM: there are two cerenkovs on h4, so your 2.25 bar would be about 1 in each cerenkov? btw, 1 bar should give pions above threshold according to the H4 documentation, so the setting is in principle wrong
JB: you remember we had a hard time putting the cerenkov in operation; i rechecked the logbook several times, we used 1 bar below 50 gev and 0.1 above
NM: were they in the trigger?
JB: should check. 20GeV run is actually quite clean, no hadron contamination

LL: i think we can summarize and say that we have a fair comparison for Tkr cluster variables, due to the new MC and to the modified cerenkov pressure set at 1 bar in the new sim. This is in line with the pressure scan test performed and the analyses that both carmelo and philippe showed, and it matches the logbook record. I think we should quantify the systematic error we make in using the average of the distribution to plot the trend, maybe by taking different data runs in similar configurations (i mean similar beam spot positions wrt the silicon wafers) and checking the fluctuation of the average. This is similar to using fiducial cuts as Philippe did, with the same cuts he uses for the CAL analysis. For the CAL variables we are still far from MC, and philippe demonstrated there is no single scale factor that accommodates the discrepancy for all configurations. We should investigate extra material along the beam line with a variable slab at variable distance wrt the CU to check. Will modify the sim to do that.
We will double check the cerenkov pressure with benoit

Tkr Cluster comparison

conclusions: we have a deficit in the MC hits, a surplus in MC clusters
PB: which pressure did you use at 280GeV?
JB: above 200GeV we are sure we emptied the cerenkov, so the pressure i applied in the sim was 0.1. also, i do not know what 'empty the cerenkov' means exactly: 0.1? 0.01?
PB: so runs from 10-100GeV were simulated with 1bar, above 200 with 0 bars
JB: yes, the pressure is summarized in the table on confluence
NM: so the drop is due to the cerenkov pressure change?

BT-like evts from orbit data

NM: just a reminder that we have an ongoing analysis, trying to select some evts using on-orbit data from similar CU configurations. we analyzed all the orbits; at some point we could show this analysis at the ISOC meeting and to the CA group
LL: yes, good to pursue this, makes sense

AOB

NM: please preregister to the CA workshop

Simulation status: we have now a stable BTRelease and are producing all the interesting configurations with it. We are following a list of golden runs that people mostly used for analysis and for which we have already tuned the MC and data beam spots. For each run we automatically produce a system test report, which allows instant comparison of data and MC, and post it in the runs list. A prototype higher-level data-MC agreement matrix was presented and is being improved. The Low-Energy Geant4 simulations, which seemed to produce a different energy deposit in the CAL, are now back to the standard energy deposit we get with standard Geant4 libraries, after fixing a bug which Francesco and Johan identified. A thorough material audit was conducted for the CAL, and modifications will have to be made in the detector geometry, which will probably add some minimal extra radiation length wrt the current model.

Analysis: a new iteration of the PSF analysis was presented by Nicola, who improved on the results in the high energy tail (above 1GeV) by aligning the CU and the incoming electron beam. A comparison of the angular dispersion plot with photons and electrons was also presented.

Conferences: two contributions were presented and very well received at the ICATPP conference in Como.

LAT September Newsletter - Beam Test contribution

Luca Latronico
September 12 2007

Since the final goal of the beam test analysis is to validate and improve the LAT simulation-reconstruction package (GlastRelease, GR), we started tracking our progress and planning through a list of deliverables that have flowed or will flow into GR.
The tables below reflect the state of the art of our work, including near term plans.
The first one is a list of the achievements that are now in place for the next Service Challenge run, while the second contains studies in progress and planned for the next months

| Topic | Software update | Description | Notes |
| --- | --- | --- | --- |
| TKR Digi | TkrDigi v2r6 | includes charge sharing and ion signal | two available routines, not enough to recover TKR hit deficit in MC |
| ACD Digi | GR-v11 | better single photo-electron signal simulation by extending Poisson fluctuations to first dynodes amplification | |
| CAL Calibration Procedure | column-wise charge injection in CAL CPT online scripts | correct non-linearities in charge injection | improved CAL calibration but did not solve energy shift; will be default calibration mode for the LAT, not relevant for simulation |
| CalRecon | GR-v11 | correct logs and inter-range cross-talk | requires mapping of cross-talk for the LAT |
| Hadronic physics list | GR-v12 | | improved model for hadronic interactions (Bertini model up to 10 GeV, QGSP model up to 20 GeV) |
| TKR material audit | GR-v12 | real TKR thin converter thickness | 8% lower wrt original design |

List of planned deliverables and expected delivery

| Topic | Expected delivery | Description | Notes |
| --- | --- | --- | --- |
| TKR material audit | end September | update mass of passive material to real values from measurements | known missing mass in current model mostly around active area |
| CAL material audit | end September | check and update CAL mass and materials | preliminary surveys indicate good model |
| TKR alignment in MC | quick fix available in GR-v12 | fix bug in MC alignment | checking out alternative alignment strategy |
| New mass simulation | end September | with latest sim-recon package | will be used to re-evaluate TKR hits deficit and CAL energy shift in simulation |
| Special TKR geometry simulation | in progress | vacuum layer between silicon layers and tray core | performed to check penetration of delta rays in a more realistic geometry; preliminary results indicate little effect on TKR hits |
| Low-energy simulations | in progress | systematic test of LE EM physics list in G4 and range cutoff studies | preliminary results indicate no effect of range cutoffs and a non-perfect control of LE physics list in our simulation |
| Background simulations with higher TKR hits | end September | artificially increase number of TKR hits to mimic BT data | will use alignment bug and will check effect on background rejection |
| Background simulation with shifted CAL energies | end September | artificially scale simulated CAL energies and most important CAL variables to mimic BT data | will check effect on background rejection and reconstruction algorithms |
| Final best physics list | November | final MC tuning | should flow into the Service Challenge 1-year run; will include best physics list and geometry modifications |

A very useful tool that we recently developed to check rapidly the effect of the many changes we are testing in the simulation is the BtSystemTest toolkit, a collection of the most sensitive plots produced so far by our team, generated automatically for each MC generation so that we can quickly evaluate the effect of changes.
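The idea of condensing such comparisons into a single agreement figure per variable (the "agreement matrix" mentioned in the simulation status above) can be sketched as below. The variable names and toy distributions are hypothetical; this is not the actual BtSystemTest code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 'data' and 'MC' samples for a few merit-like variables.
samples = {
    "CalEnergyRaw": (rng.normal(50.0, 5.0, 5000), rng.normal(49.0, 5.0, 5000)),
    "TkrTotalHits": (rng.poisson(40, 5000).astype(float),
                     rng.poisson(38, 5000).astype(float)),
}

def ks_distance(a, b, bins=100):
    # Max distance between the two empirical CDFs on a common binning --
    # a crude single-number agreement figure per variable.
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    ca = np.cumsum(np.histogram(a, bins=bins, range=(lo, hi))[0]) / len(a)
    cb = np.cumsum(np.histogram(b, bins=bins, range=(lo, hi))[0]) / len(b)
    return np.abs(ca - cb).max()

matrix = {name: ks_distance(d, m) for name, (d, m) in samples.items()}
for name, dist in sorted(matrix.items(), key=lambda kv: -kv[1]):
    print("%-14s %.3f" % (name, dist))
```

Sorting variables by such a distance highlights which distributions drift most after a simulation change.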

Recently some effort went into using BT data to check the expected behaviour of trigger primitives from the calorimeter with large energy deposits, which are not available with standard cosmic-ray data from the LAT.

BT_5sep_notes

BT Meeting, 6 september 2007, Notes

Unedited notes taken during the meeting, comments and additions are welcome LL

Participants: Luca Latronico (LL), Leon S Rochester (LSR), Benoit Lott (BL), Takaaki Tanaka (TT), Markus Ackermann (MA), Philippe Bruel (PB), Hiro Tajima (HT), Michael Kuss (MK), Jan Conrad (JC), Tomi Ylinen (TY), Elliott Bloom (EB), Berrie Giebels (BG), Eduardo do Couto e Silva (EDC), Gary Godfrey (GG), Ping Wang (PW), Bill Atwood (BA), Mario Nicola Mazziotta (MNM), Piergiorgio Fusco (PF)

News

  • Updated BT deliverables list
    LL: presented this to the IFC on monday; it was well received and they were impressed by the team work. It is a useful guide for us too, in order to have the future spelled out
  • BTRelease status - Michael Kuss
    Old sims from Francesco are harder to compare as they were not run on boer. The new BTRelease runs a factor 3 slower and uses 5 times more memory, but we can live with that on the pipeline. We will keep looking for the memory leak, but in the meantime the plan is to start generating full statistics runs.
    LL: next run to generate is the special run with modified TKR geometry with a vacuum layer between the SSD and the tray facesheet to test secondary delta rays propagation

CAL Calibration issues - Elliott

This is work done by Ping and myself; she is writing code to calculate de/dx and comparing it to geant.
3: Weaver-Westphal (WW) is probably the best de/dx code available; talked to Weaver a few times as he is at LBL. here are some results from his code
4: fred piron et al wrote a note some time ago comparing G4 with other MCs, the PDG and Bethe-Bloch (BB). he found that the g4 values did not agree very well with calculations. there are in fact some differences in what is calculated: geant calculates the mean energy deposit, while WW, PDG and BB give the energy loss, which is expected to be higher. nevertheless there seem to be issues with geant.
5: very good agreement between ground data and MC for LAT for CAL_MIP_Ratio
6: the factor used for calculating CAL_MIP_Ratio seems to be there to match data and MC; in fact the 15% difference is wrt WW, which is energy loss (higher) and not deposition (lower)
7: ping figured this out (cuts and geometry) with help from leon and tracy, now we have another person able to do this
LSR: 20.2 is probably mm, not cm (typo)
EB: yes, got it from an email, you provided that so
LSR: number is right, if you got it from the code, units might not be
PB (from message board): it is not a typo : center of tower 10 : 374.5/2+27.84/2 = 201.17mm
8: energy deposition and energy loss plots for a 1GeV muon
9: same plot as 4 with direct CAL-only gleam simulation for energy loss added. again some issues with calculated values
10: conclusions

LL: it seems to me that the only issue is with different versions of g4 (g4v6 vs g3 and g4v5 as in slides 9 and 4), as the difference wrt calculated values is explained by the difference between energy loss and energy deposition. Did you also say that the error bars are smaller than they should be?
EB: error bars are correct, but all data points look different, and the most different is the last one

BL: the note was not written by piron, but by thierry reposeur; you might want to acknowledge the right person. the purpose was more focused on comparing predictions for carbon. the CsI slab thickness that thierry used was very thin (1mm as i recall). the contribution to the deposition is fairly low, as many electrons actually escape the slab and do not release their energy therein. so there is good reason for the higher energy deposition that you find for the much thicker (1.85cm) slab you have used here
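For reference, the "energy loss" side of this comparison is the Bethe mean dE/dx. A minimal sketch for a muon in CsI, using PDG constants but no density or shell corrections (so it sits slightly above tabulated values, and above the deposited energy in a slab for the escape reasons given above):

```python
import math

# Bethe mean energy-loss rate (PDG form, no density or shell corrections)
# for a muon in CsI -- the 'energy loss', as opposed to the smaller
# 'energy deposit' left in a slab when deltas and photons escape.
K = 0.307075          # MeV mol^-1 cm^2
ME = 0.510999         # electron mass, MeV
M_MU = 105.658        # muon mass, MeV
Z_OVER_A = 0.41569    # CsI
I_EXC = 553.1e-6      # CsI mean excitation energy, MeV

def dedx_csi(e_total_mev):
    """Mean dE/dx in MeV cm^2/g for a muon of given total energy."""
    gamma = e_total_mev / M_MU
    beta2 = 1.0 - 1.0 / gamma**2
    bg2 = gamma**2 - 1.0                      # (beta*gamma)^2
    tmax = 2.0 * ME * bg2 / (1.0 + 2.0 * gamma * ME / M_MU + (ME / M_MU)**2)
    log_term = 0.5 * math.log(2.0 * ME * bg2 * tmax / I_EXC**2)
    return K * Z_OVER_A / beta2 * (log_term - beta2)

print("%.2f MeV cm^2/g" % dedx_csi(1000.0))  # ~1.4 for a 1 GeV muon
```

Multiplying by the CsI density (~4.5 g/cm^3) gives the loss per cm; the deposit measured in a 1.85 cm crystal is expected to be somewhat lower.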

LL: which version are we using in BTRelease or GlastRelease?
LSR: v8
LL, PB: yes
EB: we are using g4v6r29p5
MK: well, the numbers can be different; v6r29p5 is our numbering convention for our g4 external library version, nothing to do with the actual g4 version
EB: how do we find corresponding version?
LSR: probably picking g4v8

HT: very basic question - when you say mean de/dx do you mean the peak or the average value of the distribution?
EB: slide 8, both calculated, we compare the same quantities in every case
HT: when you calculate mean energy loss, do you include all energy band, i.e. not just what is in the histogram which may be cut?
EB: all you see is on the plot and there are no other data points

BA: wrt the calculation of CalMipRatio, since the code is ultimately from me: the energy factor was simply determined to make the peak from muon runs equal to 1. nothing fancy there. calcsirln is derived by tracing tracks to the cal and computing the equivalent rl path
EB: thanks, very useful

GSI analysis - Eduardo

This is preliminary work, I got some feedback from Eric Grove, but I could not include it as i was away
2: motivation for this study: getting ready for leo, and we have limited tests of timing for cal triggers
gsi data are a natural place to look. hope this analysis will evolve into a LEO analysis, and to extend it to other ion species. The main question is whether cal-le and cal-he behave as expected with C ions
3: so far analised 1 run only.
4: basic distributions, no cuts, will make simple selections. will look at most populated trigger types (22 and 30)
5: middle plot is maxene in the cal, bottom plot is the question, i.e. the arrival time difference between cal-lo and cal-hi. this is expected to be negative, cal-le cannot arrive before cal-he
PB: blue is 22 (cal he on), red is 30 (cal he off), so do not understand middle plot
EDC: that is the max energy in a crystal for cal-le and cal-he triggered events
BL: it seems that blue and red are swapped
EDC: could be, will check offline (later confirms that these are swapped)
6: time arrival difference for different gem cuts
7: first column is all evts, middle column is evts with an arrival time difference smaller than 10, right column is the time difference for positive and large arrival time differences; middle row is the condition arrival time for cal-lo, bottom row for cal-hi.
8: same color code, second and bottom row are scatter plots of calmaxene vs condition arrival time
EB: what do you know about a priori timing between those guys?
EDC: inferring they are not different, but we do not know, will comment later
8: to get time just multiply a tick by 50 nsec
9: the shoulder at 5-10 ticks is from direct energy deposition in the diode, usually 5-10 ticks earlier than peak deposit. these are expected and were measured by martin during IT, but could not do it with cal-hi as we had too few events.
10: test this idea by looking at the signal in nearby cal modules
11: twr1 and twr3 cal-he triggers requiring either cal-le, cal-he, or both. please note once again the difference in the x and y axes. the point is that there is an abundance of triggers in adjacent towers, so it should not be diode deposition
12: same as 11 but for cal-le
13: event display of evts with cal-le arriving before cal-he
15: another possible explanation - could it be the jitter (slide 9)? a jitter would be between 2 and 4 ticks, but the effect extends up to 10-12 ticks, so it must be something else

eric argues that cal-lo should never be allowed to open the trigger window ...

LSR: could it be a skewing effect?
EDC: will check that, will talk offline

LL: i recommend you look for clean evts, maybe cutting on tkr variables, as the instantaneous rate at GSI was much higher than the average, in particular at the beginning of the spill, when the machine was delivering beam for other users and collimators were closing into our line

BL: i am not sure C events are the best for this study, as the average non-interacting energy deposit for C ions is below cal-hi threshold. did you consider looking into cern events? we should have more handles there, and we have external trigger too, so potentially you could estimate timing wrt the external trigger
EDC: right, looking at cern data too, this just came earlier

BA: slide 8 middle plot is definitely strange, as you said: the energies are all >1GeV in a xtal, nothing to do with cal-lo events
EDC: must go back and check, I might have swapped labels for cal-le and cal-he

BL: is there a schematics of the trigger timing?
EDC: yes, I will provide that

LE simulations update

LL: francesco and carmelo are investigating the LE simulations presented last week, indeed there is a problem in the energy release for TKR and ACD, where recent LE simulations deliver twice the energy wrt standard glast physics. Will update as soon as we know more on this.

AOB

BA: what is the status of understanding tkr multiplicities? there is a variable that we use very much in background rejection, called tkr1corehc, and it is very important that we have a good MC for that
LL: we will re-evaluate the TKR hit discrepancy, including that of tkr1corehc, once we generate simulations with the new BTRelease, and we are about to start that. we will also perform a dedicated background simulation run with extra hits, using the alignment bug in the simulation discovered and fixed by leon that seemed to provide more hits. we are also preparing a run with a modified tkr geometry (vacuum layer between SSD and tray facesheet) to test secondary delta ray propagation with a more realistic geometry (glue dots instead of an average-density layer of glue)