When the LHC beams cross in ATLAS, each crossing at design luminosity will contain on average more than 20 hadronic interactions. These are mostly low-pT, "uninteresting" events, but they are necessarily superimposed on the high-pT, "interesting" events that we normally simulate. While these background events could in principle be simulated, it is far simpler to use real data collected with a zero-bias trigger: this naturally includes all background sources in the right proportions and can track changes in these backgrounds over time. Overlay refers to the mechanism of adding these real-data background events to the simulated high-pT event in order to reproduce what we actually observe.

Outline of Overlay

The first step is to acquire luminosity-weighted zero-bias events. Select a trigger that (1) scales with luminosity, (2) has a high enough rate to be statistically precise, and (3) is robust against changing beam conditions. When this trigger fires, wait one full revolution of the LHC beams and record the next crossing of the same bunches, whether or not any trigger is satisfied on that subsequent crossing.
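As an illustration, here is a minimal sketch of that timing rule, assuming each crossing is identified by an orbit number and a bunch-crossing ID (BCID); the function name and bookkeeping are hypothetical, not part of any ATLAS tool.

```python
# Minimal sketch of the one-revolution delay described above.
# Assumption (not from the original text): crossings are labeled by
# (orbit, BCID); the same bunches meet again one orbit later.

def zero_bias_target(orbit: int, bcid: int) -> tuple[int, int]:
    """Given the (orbit, BCID) at which the seed trigger fired, return
    the (orbit, BCID) of the crossing to record: the same bunches one
    full LHC revolution later, independent of its own trigger decision."""
    return orbit + 1, bcid

# Example: seed trigger fires on orbit 1000, BCID 301 -> record the
# crossing at orbit 1001, BCID 301.
assert zero_bias_target(1000, 301) == (1001, 301)
```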

When we need N simulated events for a given time period, we select N zero-bias events recorded during that period and look up the detector conditions at those N discrete times. These conditions may be the same for some stretch of time, or they may differ for every event. We then generate and simulate the signal events according to these conditions, whatever they might be.
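A minimal sketch of this selection step, assuming each zero-bias event carries a (run, lumiblock) key and that some conditions lookup is available; `zero_bias_events` and `get_conditions` are hypothetical names, not real ATLAS interfaces.

```python
import random

def select_with_conditions(zero_bias_events, n, get_conditions, seed=42):
    """Pick n zero-bias events at random (their recorded rate already
    tracks luminosity) and attach the detector conditions valid at each."""
    rng = random.Random(seed)
    picked = rng.sample(zero_bias_events, n)
    # Cache lookups: conditions are constant within a luminosity block.
    cache = {}
    pairs = []
    for evt in picked:
        key = (evt["run"], evt["lumiblock"])
        if key not in cache:
            cache[key] = get_conditions(*key)
        pairs.append((evt, cache[key]))
    return pairs
```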

After the signal events have been simulated, the hits recorded in the corresponding zero-bias events are added in, giving the complete signal-plus-background sample. This step is referred to as overlay.
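The core of this step can be pictured as merging per-channel hit collections. The sketch below assumes a pixel-like detector where each hit collection maps a channel ID to a deposited charge; real overlay must also handle timing, noise, and readout thresholds, and all names here are hypothetical.

```python
from collections import defaultdict

def overlay_hits(signal_hits: dict, zero_bias_hits: dict) -> dict:
    """Merge two channel->charge maps; charges on the same channel add."""
    merged = defaultdict(float)
    for hits in (signal_hits, zero_bias_hits):
        for channel, charge in hits.items():
            merged[channel] += charge
    return dict(merged)

# Example: a channel hit in both events receives the summed charge.
assert overlay_hits({1: 0.5}, {1: 0.25, 2: 0.25}) == {1: 0.75, 2: 0.25}
```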

Overlay Validation

It is important that the overlay process reproduce what the real detector would have recorded. This is relatively simple when the detector is sparsely populated and there are no particle overlaps; it is more difficult when the occupancy is high. For example, the ATLAS pixel detector has ~80 million elements, and its average occupancy is low. However, the local occupancy, e.g. in the core of a jet, can be high. Failure to overlay properly inside a jet can degrade physics performance such as b-tagging.
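To see why the distinction matters, here is some illustrative arithmetic; every number except the channel count is hypothetical, chosen only to contrast average and local occupancy.

```python
# Hypothetical numbers: only the ~80 million channel count comes from
# the text above; the hit counts are invented for illustration.
total_channels = 80_000_000
hits_per_event = 8_000                 # assumed
avg_occupancy = hits_per_event / total_channels            # 1e-4

# In a jet core, many hits concentrate in a small region of the detector:
channels_in_jet_core = 2_000           # assumed small local window
hits_in_jet_core = 100                 # assumed
local_occupancy = hits_in_jet_core / channels_in_jet_core  # 5e-2

print(f"average: {avg_occupancy:.0e}, jet core: {local_occupancy:.0e}")
```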

Alignment Challenge

Ideally, we would like to use the real conditions and alignment of the detector for the entire chain: generate the event at the measured beam spot, record hits in the detector at its real position, and so on. If the signal Monte Carlo uses an alignment different from the real detector's, problems arise at reconstruction time. If reconstruction uses the real alignment, the Monte Carlo track hits are not lined up and tracking efficiency for the signal would be low. If reconstruction uses the Monte Carlo alignment, the signal event performs as expected, but the background tracks become inefficient.
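The effect can be illustrated with a one-dimensional toy: a hit produced under one alignment but reconstructed under another picks up a systematic residual equal to the alignment difference. All numbers and names below are hypothetical.

```python
# One detector element, 1-D positions in mm. Hypothetical values.
MC_ALIGNMENT = 0.0      # element position assumed in simulation
REAL_ALIGNMENT = 0.125  # element position measured in data

def hit_residual(true_x: float, hit_alignment: float, reco_alignment: float) -> float:
    """Residual seen by reconstruction when the hit was produced under
    one alignment but reconstruction assumes another."""
    measured_local = true_x - hit_alignment    # local coordinate stored in the hit
    predicted_local = true_x - reco_alignment  # where reconstruction expects it
    return measured_local - predicted_local

# MC signal hits reconstructed with the real alignment pick up a
# systematic offset equal to the alignment difference:
assert hit_residual(10.0, MC_ALIGNMENT, REAL_ALIGNMENT) == 0.125
# Real background hits reconstructed with the real alignment line up:
assert hit_residual(10.0, REAL_ALIGNMENT, REAL_ALIGNMENT) == 0.0
```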

While the preference is to use the real alignment throughout, the real alignment constants are known to produce (artificial) overlaps between detector elements. We have not seen any change to physics quantities, but we cannot rule out the possibility.

One major challenge is to understand how to use the alignment constants. For example, can we remove these (artificial) overlaps? If not, can we be sure that physics is unaffected?

Technical Demonstration

We also need to demonstrate that the entire chain is functional and that all the tools are in place. For example:

  • How do we select N zero-bias events from the entire collection? How do we do this when the prescale factors changed during data acquisition? (See the sketch after this list.)
  • Can we overlay the same zero-bias event that also defined simulation conditions?
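On the first question, one plausible approach is to weight each stored event by the prescale that was active when it was recorded, since a trigger with prescale p keeps only 1/p of its accepts. A minimal sketch, with hypothetical field names:

```python
import random

def sample_prescale_aware(zero_bias_events, n, seed=42):
    """Sample n events, weighting each by its recorded prescale so that
    the selection restores luminosity weighting. Samples with
    replacement, which is adequate for a sketch."""
    rng = random.Random(seed)
    weights = [evt["prescale"] for evt in zero_bias_events]
    return rng.choices(zero_bias_events, weights=weights, k=n)
```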

This testing started recently.

Contacts:
