Plots for the ARC of HCAL CRAFT paper CFT-09-018
In response to specific comments received on Sat., Sept. 26, 2009. The comments and line numbers refer specifically to version v5 of the paper.
Comment #55: "Equation 2: Sasha has a question:"
...related to the correctness of Equation 2.
Response: Write the pulse shape as p(t), with p(t) defined to be 0 prior to the start of the pulse, and the integration window as T = 25 ns. The key is that shifting the pulse earlier in time, or "to the left", by a phase \phi = n\Delta, where \Delta = 25/32 ns is the TDC bit resolution, means you are integrating the same pulse shifted by a phase. So adjust the integration limits:

\int_0^{T} p(t+\phi)\,dt = \int_{\phi}^{T+\phi} p(t)\,dt = \int_0^{T+\phi} p(t)\,dt - \int_0^{\phi} p(t)\,dt

You are left with two terms representing the two pieces of the pulse at the end and the beginning of the interval. Since p(t) is 0 prior to the start of the pulse, for the first sample, with the pulse just entering from the right, the second term is 0 and you only have the first term. This is the piece represented in the document, modulo a variable substitution to get the range right. After 32 phases (25 ns worth), you have to start subtracting out the piece of the pulse that has left that 25 ns bin (the second term above), but that is a detail the paper probably does not need.
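The substitution can be checked numerically; a minimal sketch, assuming a hypothetical pulse shape p(t) (illustrative only, not the real HCAL pulse):

```python
import numpy as np

def p(t):
    # Hypothetical pulse shape, zero before t = 30 ns so the pulse starts
    # by "entering from the right" of the first 25 ns window.
    t = np.asarray(t, dtype=float)
    x = np.clip(t - 30.0, 0.0, None)
    return x * np.exp(-x / 8.0)

def integrate(f, a, b, n=20001):
    # Composite trapezoidal rule for f over [a, b].
    t = np.linspace(a, b, n)
    y = f(t)
    return (b - a) / (n - 1) * (y.sum() - 0.5 * (y[0] + y[-1]))

T = 25.0            # integration window (ns)
dphi = 25.0 / 32.0  # TDC bit resolution: 32 phases per 25 ns

for n_phase in (0, 5, 16, 31):
    phi = n_phase * dphi
    # Shifted-pulse integral equals the two-term form with adjusted limits.
    lhs = integrate(lambda t: p(t + phi), 0.0, T)
    rhs = integrate(p, 0.0, T + phi) - integrate(p, 0.0, phi)
    assert abs(lhs - rhs) < 1e-4
```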
Comment #62: "Line 115: Since in a sense this is the basic result, a figure would really be nice..."
Response: Here is the plot that has been traditionally associated with the 1.2 ns resolution result. It comes from a position scan of an HB wedge over 64 cells (4 in phi x 16 in eta) with a 150 GeV pion beam.
- Fundamental resolution of HCAL barrel/endcap for 150 GeV pions:
Comment #113: Fig 10: Don’t understand how you get Filtered/Unfiltered ratios >1
Response: The key to understanding this whole method is that the total number of events in the MET distribution is "conserved". The method does not throw out events; it only throws out calorimetric "hits" contributing to the MET calculation for an event, if they are "out-of-time."
So when the filter efficiency is as high as it is, you start to see effects that combine with fluctuations from low statistics (see the response to comment #116 below). Eliminating one rechit on the basis of time changes that event's MET and migrates the event to a different bin, which can reduce or increase the contents of a particular bin, so the filtered/unfiltered ratio there can exceed 1. The following plots zoom in on a region that exhibits such a fluctuation.
- Zoomed in to QCD sample to see fluctuations:
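The bin-migration effect can be illustrated with a toy model (the event content, timing cut, and all numbers here are hypothetical, not the actual filter code):

```python
import numpy as np

rng = np.random.default_rng(0)

def met(hits):
    # MET = magnitude of the vector sum of hit transverse energies.
    ex = sum(h[0] for h in hits)
    ey = sum(h[1] for h in hits)
    return float(np.hypot(ex, ey))

# Toy events: ten in-time hits each; 10% of events also get one large,
# badly mistimed hit that inflates MET.
events = []
for _ in range(5000):
    hits = [(rng.normal(0, 5), rng.normal(0, 5), rng.normal(0, 5))
            for _ in range(10)]                     # (Ex, Ey, time)
    if rng.random() < 0.1:
        hits.append((rng.normal(0, 50), rng.normal(0, 50), 75.0))
    events.append(hits)

def in_time(hit):
    return abs(hit[2]) < 25.0  # hypothetical timing window

bins = np.arange(0.0, 401.0, 10.0)
unfiltered = np.histogram([met(e) for e in events], bins)[0]
filtered = np.histogram([met([h for h in e if in_time(h)]) for e in events],
                        bins)[0]

# The filter drops hits, never events: the total is conserved, and events
# whose MET shrinks migrate to lower bins, so some bins end up with a
# filtered/unfiltered ratio above 1.
assert unfiltered.sum() == filtered.sum() == len(events)
```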
Comment #116: "Fig 10: better binning of all plots. The middle right plot has discontinuity at 500 & 700 GeV which does NOT appear to be a statistical fluctuations. Is it a normalization problem?"
Response: We assume "all" refers to all plots within the figure, and "better" means "a smaller number of bins". The discontinuities are the result of statistical fluctuations, not a normalization problem. Recall that the middle plots are generated by integrating "to the right" of a threshold that is itself swept from left to right across the top plots; this smooths out the statistical fluctuations seen there. As the threshold approaches a high fluctuation (while the distribution is falling), the fluctuation gains prominence in the integral until it dominates. Once the threshold passes the fluctuation, the integral drops drastically.
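The sweep-and-integrate behaviour can be sketched with a toy falling spectrum containing a single upward fluctuation (all numbers illustrative):

```python
import numpy as np

# A steeply falling toy spectrum with one upward fluctuation at bin 8.
counts = np.array([1000, 400, 160, 64, 26, 10, 4, 2, 30, 1, 0, 0])

# Integral to the right of each threshold bin (inclusive).
right_integral = counts[::-1].cumsum()[::-1]

# Fraction of the right-hand integral contributed by the fluctuation (bin 8).
frac = np.array([30.0 / right_integral[i] if i <= 8 else 0.0
                 for i in range(len(counts))])

# The fluctuation dominates the integral just before the threshold passes it,
assert frac[8] > 0.9
# and the integral drops drastically once the threshold moves past it.
assert right_integral[9] < right_integral[8] / 10
```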
Here are plots of the individual bins stacked together and plotted separately (the samples were all normalized to ...). You can see that the source of the spikes is individual events from each sample.
- Summer08 QCD MET distribution - Pt bins stacked together:
- Summer08 QCD MET distribution - Pt bins plotted separately:
A potential solution was arrived at during the ARC review. As the QCD Pt bins are "hadd-ed" together, weight the samples by the statistical error in each bin. So as an example drawn from the problematic dataset above, zoom in to the Pt=170 GeV sample (which has two high MET fluctuations around 500 GeV) and the Pt=470 GeV sample (which contributes the most statistics around 500 GeV):
To simplify, round the numbers as follows:
- Pt=170 GeV sample has ... events scaled by a factor of ...
- Pt=470 GeV sample has ... events scaled by a factor of ...
Form the weighted average using 1/(error)^2 as the weights:

\bar{x} = \frac{\sum_i x_i / \sigma_i^2}{\sum_i 1 / \sigma_i^2}

which gives the desired result of being heavily weighted toward the high-statistics sample.
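A minimal sketch of the proposed inverse-variance weighting, with hypothetical stand-ins for the bin contents and errors (the actual numbers are not reproduced here):

```python
# Inverse-variance weighted average of two samples' contributions to one
# MET bin. The numbers are hypothetical stand-ins for the Pt=170 GeV
# (spiky, low-statistics) and Pt=470 GeV (high-statistics) entries.
def weighted_average(values_and_errors):
    # x_avg = sum(x_i / err_i^2) / sum(1 / err_i^2)
    num = sum(x / e**2 for x, e in values_and_errors)
    den = sum(1.0 / e**2 for _, e in values_and_errors)
    return num / den

low_stats = (50.0, 35.0)   # scaled bin content, large statistical error
high_stats = (4.0, 0.5)    # scaled bin content, small statistical error

avg = weighted_average([low_stats, high_stats])
# The result sits close to the high-statistics sample's value.
assert abs(avg - high_stats[0]) < 1.0
```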
UPDATE: It turns out the above method does not work. The following shows why. A bin with N raw events from a sample with scale factor s has content sN and statistical error s\sqrt{N}, so its inverse-variance weight is

w = \frac{1}{(s\sqrt{N})^2} = \frac{1}{s^2 N}

The scale factor s ranges from ... for the Pt = ... GeV sample to ... for the Pt = ... GeV sample. The highest Pt bin also has the smallest number of events, so the sums are dominated by this, the lowest-probability sample.
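The failure can be demonstrated numerically; a minimal sketch with illustrative numbers (the actual scale factors and event counts are assumptions, not the paper's values):

```python
# Why inverse-variance weighting fails across QCD Pt bins: the weight
# 1/(s^2 * n) explodes for the highest-Pt sample, which has both the
# smallest scale factor s and the fewest raw events n.
samples = {
    # name: (raw events n in the bin, scale factor s) -- illustrative
    "low-Pt": (400, 50.0),    # huge cross-section, large scale factor
    "high-Pt": (4, 0.001),    # tiny cross-section, small scale factor
}

weights = {name: 1.0 / (s**2 * n) for name, (n, s) in samples.items()}
total = sum(weights.values())
share = {name: w / total for name, w in weights.items()}

# The lowest-probability (highest-Pt) sample dominates the weighted sums,
# the opposite of the intended behaviour.
assert share["high-Pt"] > 0.999
```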
We next tried a software tool called KEYS ("Kernel Estimating Your Shapes"), an adaptive kernel-estimating smoother. Here are the results:
- attempt to smooth QCD using KEYS adaptive kernel estimator:
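As a rough stand-in for KEYS (which adapts the kernel width to the local density, and is available in RooFit as RooKeysPdf), a fixed-bandwidth Gaussian kernel estimator shows the basic idea of replacing a spiky histogram with a smooth density. The data and bandwidth here are illustrative assumptions:

```python
import numpy as np

def fixed_kde(data, grid, bandwidth):
    # Fixed-bandwidth Gaussian kernel density estimate: each data point
    # contributes one normalized Gaussian bump. KEYS instead shrinks the
    # bandwidth where points are dense and widens it in sparse tails.
    d = (grid[:, None] - data[None, :]) / bandwidth
    return np.exp(-0.5 * d**2).mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
# Spiky toy "MET" spectrum: a falling bulk plus a few isolated high entries,
# like the single-event spikes discussed above.
data = np.concatenate([rng.exponential(50.0, 2000), [500.0, 510.0, 700.0]])

grid = np.linspace(-100.0, 850.0, 191)
density = fixed_kde(data, grid, bandwidth=15.0)

# The result is a smooth, non-negative density integrating to ~1.
dx = grid[1] - grid[0]
integral = dx * (density.sum() - 0.5 * (density[0] + density[-1]))
assert density.min() >= 0.0 and abs(integral - 1.0) < 0.05
```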
--
PhilDudero - 2009-09-28