Blog from April, 2008

Unedited notes; corrections and additions are welcome. Luca (LL)

News

LPM effect
LSR: LPM effect exists and we should see it, turning it off should not be the solution.
thin-target effect, again largest for high-Z material, also suppresses brem; we should not consider this solved until we get a correct implementation
JB: you are right, I never said the problem is solved; we just know the current implementation is not correct, and the G4 developers advised us to turn it off. We already saw we need some LPM effect, starting from 100 GeV or so. We will have to work with our data and the G4 team to implement this correctly
BG: turning LPM off is like having an upper limit, right?
JB: yes
BG: whatever we get when we switch it on will be less than what we have with LPM off?
JB: something that Philippe brought up a few times is the connection between the hits agreement and extra material
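A back-of-envelope check of where LPM suppression should kick in, using the standard characteristic energy E_LPM ≈ 7.7 TeV/cm × X0; the CsI radiation length used here is an assumed value, and this is only a sanity check, not the Geant4 implementation:

```python
# Rough estimate of the photon energy below which LPM suppresses brem.
# X0 for CsI (~1.86 cm) is an assumed value; this is a sanity check only.
X0_CSI_CM = 1.86
E_LPM_GEV = 7.7e3 * X0_CSI_CM  # ~14300 GeV for CsI

def lpm_photon_threshold(e_gev):
    """Photons with k below roughly E^2 / E_LPM are LPM-suppressed (valid for k << E)."""
    return e_gev ** 2 / E_LPM_GEV

# For a 100 GeV electron only the soft end of the brem spectrum (below ~0.7 GeV)
# is suppressed, consistent with the effect becoming visible from ~100 GeV up.
```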

Temperature correction for CAL pedestal
see the posted summary from Sasha
BG: any chance to get pedestal correction for some proton runs where I saw drift of MIP peak?
AC: you can apply it to any run and the pedestal will be correct, except that I do not correct for the rate effect
BG: took care of that, picked the low rate runs and cut on GemDeltaEvtTime
PB: the idea is to get a stable BT release, and then reprocess all the data

LL: Johan and Francesco will take care of the reprocessing?
JB: yes, we can do it on the pipeline or locally
LL: either way, we should advertise the reprocessed data to have more eyes on those

AC: I just noticed some strange things related to the T measurement, namely the time: it looks like the time in the start-time column of the run DB does not always match the time in the logbook. A 9-hour shift appeared and later disappeared, around August 9

JB: what happened, I think, is that we changed the time of the BT server, where the HKP logbook was residing. We changed it from SLAC time to European time at the users' request.
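The 9-hour step is exactly the wall-clock difference between California and central Europe in August (PDT is UTC-7, CEST is UTC+2); a minimal check, with an arbitrary August date:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# Arbitrary August date, chosen only to fall in daylight-saving time.
t_slac = datetime(2006, 8, 9, 12, 0, tzinfo=ZoneInfo("America/Los_Angeles"))
t_eu = t_slac.astimezone(ZoneInfo("Europe/Rome"))

# PDT is UTC-7 and CEST is UTC+2, so the wall-clock offset is 9 hours.
offset_hours = (t_eu.utcoffset() - t_slac.utcoffset()).total_seconds() / 3600
```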

AC: another thing we noticed: there was some strange change in T, like an abrupt step up and down of about 1 deg, definitely not real cooling and heating; it happened in 5 minutes, so it is too fast. It seems to be correlated with power up and down. It seems that even if the CU is powered down, the temperature is still measured

CS: the T monitors are on the cables, and can be read out with the FE off
AC: but it changes immediately, so it cannot be heating from the FE
CS: yes, we observed that during TVAC tests; there is some sort of xtalk that changes the T value when switching the FE on or off

PB: what are the 4 columns in the T file?
AC: 4 sensors
PB: 5 sep gives no data for two towers
AC: looks like tower 2 or 3 gave no T for that day
JB: for some time we ran with some special config with just 1 TEM on, I think that was a CPT
AC: anyway it seems that for all periods when we took data we have T available

Data reprocessing with EM hypothesis

2: black is behind red
3: the only differences are in the Kalman filter related variables
4: some more plots
5: conclusions. we should use the same hypothesis in data and MC

PB: we should use the EM hypothesis for gamma and electron runs, not for hadron runs
CS: right, I think we have the same discrepancy (MC flag vs data flag) for the electron runs; I did not check the hadron runs
PB: we need to have the same hypothesis for MC and data
JB: we always used MIP for data and EM for MC
LSR: just to remind people of what this is: for the MIP hypothesis there is a dE/dx calculation, mainly coming in for the Kalman energy estimate; for the EM hypothesis, we assume the energy deposition falls off along the layers exponentially in radiation lengths. So for the same track, the Kalman energy will be higher for the e-radloss hypothesis. This affects the fit itself, which uses this info to partition the energy between the vertex tracks, so you should see some 2nd-order effect from that
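A toy sketch of the two hypotheses LSR describes; the shapes and scale are illustrative only, not the actual Kalman-filter code:

```python
import math

def expected_loss(depth_x0, hypothesis, scale=1.0):
    """Toy expected energy loss at a given depth (in radiation lengths).

    MIP: flat dE/dx along the track.
    EM:  radiative loss falling off exponentially in radiation lengths.
    The scale and fall-off constant are illustrative, not calibrated values.
    """
    if hypothesis == "MIP":
        return scale
    if hypothesis == "EM":
        return scale * math.exp(-depth_x0)
    raise ValueError(hypothesis)

# With the same normalisation at the top, the EM hypothesis concentrates the
# radiative losses near the start of the track, which is consistent with the
# Kalman energy coming out higher for the e-radloss hypothesis.
```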

News

Sasha (AC): just a comment on those slides: the CsI density of 4.51 is correct, as we found out later, so do not change that. Aous is looking at T for those runs using the pedestal drift, and he got rather high T, around 29-30, so the real T would be really interesting to look at; if T is that high there will be some effect. At the end of SPS we had similar T, so if we made the muon calibration at the same T, the light yield would be the same; since the PS runs were mostly taken at lower T (22), we would see differences for those runs, although in the wrong direction, i.e. we should have a lower signal. So it will not compensate for all our discrepancy, and may possibly go in the other direction, so it is important that we get this information
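A sketch of the kind of linear temperature correction being discussed; the coefficient and reference temperature below are illustrative placeholders, not Sasha's actual calibration numbers (CsI(Tl) light output drops as temperature rises, hence the negative sign):

```python
ALPHA_PER_DEG = -0.01  # assumed fractional light-yield change per deg C (illustrative)
T_REF = 22.0           # deg C, e.g. the typical PS-run temperature mentioned above

def corrected_signal(raw, t_run):
    """Refer a measured signal back to the reference temperature T_REF."""
    return raw / (1.0 + ALPHA_PER_DEG * (t_run - T_REF))

# A run at 29 deg C has a suppressed signal, so the correction scales it up;
# a run already at T_REF is unchanged.
```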

Low E simulations - Carmelo (CS)

2: these runs were simulated with both std and LE physics. we observe no big effect below 10 GeV, as expected, but we find a surprising increase in the number of hits, which gets closer to the data as the energy increases. I have put all the reports here http://www.pi.infn.it/~sgro/reports_EleLowEnergy_16_4_08.zip

3: left is tower not hit by the beam, right is tower hit by the beam; top is cluster, bottom is hits. interesting to notice that red curve for cluster matches better data (black)

4: plotted the ratio and the hit and cluster profile, again 5GeV gives no big change, 50 gives better agreement for LE physics

5: some cluster variables from the merit, same behaviour

6: some CAL variables show no difference between std and LE physics

Leon (LSR): very interesting; for physics lists that emphasize LE behaviour, it might make sense to test with models of the detector that have shorter range cuts and finer granularity in the geometry
CS: we have used std cuts
Luca (LL): what do you mean by finer geometry?
LSR: we made tests with realistic honeycomb geometry and glue dots which showed no changes, it would be interesting to test LE physics with such models
CS: the honeycomb tests were done at 1 and 10 GeV, away from these energies; it would be good to redo those tests

Philippe (PB): we know, at least for the energy discrepancy, that we will need to add some material in front of the CU. I showed some time ago that if you add 10% X0 the cluster number gets into agreement with data, so at least for 50 GeV, where you have a perfect match, you will lose agreement with the extra material - so we need to keep this in mind (see https://confluence.slac.stanford.edu/download/attachments/13893/beamtestmeeting_20071107.pdf?version=1)

Johan (JB): I do not remember that, but at some point there was a plot suggesting that at SPS we might have too much material in the beam line, which was causing some spread of the beam with the wrong divergence, so we do not know yet if and how much material we have to add for the SPS simulation. For cuts, we use 10 um, as usual, and have shown many times that going below this does not make any change; I am currently in touch with Vladimir Ivantchenko at CERN, and he is suggesting some tests with configurable parameters to understand where these changes come from. I plan to make these tests in the CU standalone tower simulation; at that point it will make sense to test different geometries as Leon suggests

David (DP): I had found no difference when changing the production cuts, although I did not check with LE physics (see here https://confluence.slac.stanford.edu/download/attachments/4096462/Check_ProdTh_Geant4_2008_02_06.pdf?version=1). Did you check what happens with runs at energy > 50 GeV?
CS: yes, you can find plots for all those runs in the link I pasted in the chat window, I just selected two representative runs
DP: is the agreement that you find good at high energy? is there a different behaviour?
CS: agreement is good at any energy > 20 GeV; there is something strange in the sim at the highest energy, slightly worse than at 50 GeV, but anyway better than std physics

Only suggestions and discussion items are in these notes; I did not take notes in realtime

New runs with realistic TKR noise occupancy (10 times lower than earlier sims)

  • old simulations give more hits than data, contrary to the previous analysis from Nicola - check cuts and software versions
  • compare with Leon's studies on threshold changes - he did not do systematic studies with different occupancies, but only changed the strip threshold

Update on CalLkHdEnergy - Yvonne

PB: you should use CTBCORE>0.1, we always use that cut in the analysis to ensure reasonable reconstruction
BG: distributions are asymmetric, you should use the peak instead of the average value
EB: add some delta beam for GR and check if we see the same effect
PB: no reason to expect CTBBEstEnergy to behave the same for CU and LAT, for example slide 17 would not look the same for the LAT
PB: we should probably limit the CT selection close to the boundary of CalLkHdEnergy definition
EB: for IRFs, pay attention that CTB smears selection, very important for DM. Maybe reconsider the strategy for determining the best energy?
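BG's point about using the peak rather than the mean can be illustrated with a toy right-skewed distribution (all numbers illustrative):

```python
import random

# Toy asymmetric (log-normal) distribution, standing in for an energy-ratio
# distribution with a long high-side tail; parameters are illustrative.
random.seed(0)
x = [random.lognormvariate(0.0, 0.5) for _ in range(100_000)]

mean_val = sum(x) / len(x)

# Crude histogram-based peak (mode) estimate.
nbins, lo, hi = 200, 0.0, 4.0
counts = [0] * nbins
for v in x:
    if lo <= v < hi:
        counts[int((v - lo) / (hi - lo) * nbins)] += 1
peak_bin = max(range(nbins), key=counts.__getitem__)
peak_val = lo + (peak_bin + 0.5) * (hi - lo) / nbins

# For a right-skewed distribution the mean sits above the peak, so quoting
# the average biases the quoted value toward the tail.
```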

Temperature effects on light yield - Sasha

PB: is temperature effect correction in the code? do we plan to make it before launch?
AC: T should be fairly stable in orbit, so there is no need to correct, but I can put this correction in for the CU analysis within a week for the light yield; I need to know the temperature for the runs taken in Pisa during the muon gain calibration
LL: will make sure you get them, sorry that you do not have them yet

BT paper discussion - Summary

We will write a note for internal use of the collaboration and reconsider publishing a paper from that note depending on the results; we will include the latest changes from the T correction but most likely not the new CU calibration, which will likely happen after launch.
Luca and Philippe will do the drafting and ask people to contribute and proof-read it. Elliott volunteered to proofread the draft. This should not change our focus on constraining uncertainties and improving the data-MC agreement and the analysis in general, even if we decide to publish a short paper out of the note

Steps towards BT paper

With launch approaching, we should consider our strategy for publication of the BT results before all our efforts are dispersed.
I believe we should identify the path towards a new assessment of the CAL discrepancies (energy scale, caltransrms, longitudinal position measurement) - below is a first list to discuss:

  • analysis with updated pedestal files - Sasha
  • new CU calibration - SLAC
  • reprocessing of key runs using the above updates - Johan, Franz
  • re-evaluation of discrepancies - all

When this is done, depending on the residual discrepancies, we should consider the following options:

  1. residual discrepancies at the few-% level: we could publish a short paper with the current status and the basic data-MC comparison plots; in order to minimize the effort, I would discuss only this and refer to the GLAST symposium paper for the description of the setup and dataset. Pro is that we release the pressure on us for releasing our analysis to the collaboration, and we have a paper for reference that ensures that our performance parameterization based on MC is grossly under control; con is that we would presumably prefer to publish better results after such a big effort, and we may not find the energy to finalize the analysis to the sub-% level
  2. no significant improvement in the agreement: continue analyzing and delay publication until we have good agreement. Pro is that we keep looking for a final solution; con is that we have no clear idea of when this will be possible, and we will have less and less time/people to work on this
  3. no significant improvement in the agreement: publish a short note where we honestly state the status of our analysis (8-10% discrepancy), concluding that we are currently dominated by uncertainties in the CU calibration and are working on that. This would require at least, in my opinion, that we prepare two very clear and well-motivated statements: i) explain why we believe the CAL calibration on orbit will be better than the CU calibration with a beam, and ii) provide some initial indication that the discrepancies are not so critical for bkg rejection, through the study of stretched-variable datasets

We should discuss this during the meeting and provide feedback to the publication board on what the group thinks.