CSC Calibration Workflow
This page documents the work to be done in the first few months of 2009.
The following broad areas define the work flow:
- test pulsing programs & production of constants
- validation of constants and production of payload & loading into databases
- validation of calibration with resolution studies
- improvement of offline reconstruction and simulation
In my estimation,
- test pulsing is fine
- validation of constants, etc. requires work
- resolution studies are weak, even non-existent
- offline reconstruction needs to be updated and improved
We have O(6 months) to finish everything.
Work Flow
- test pulsing programs & production of constants
  - question: is there any more work to be done on the test pulsing programs? [expect no]
  - question: are there any more issues with the calculation of constants? [expect no]
  - to do: what are the differences w.r.t. the old constants from the MTCC era?
  - to do: ensure sufficient accuracy in the ASCII files
  - to do: make sure results are backed up and can be retrieved in case of problems
  - to do: this is a good time to write a thorough document on this part
- validation of constants and production of payload & loading into databases
  - to do: what does Oana do with the input files from Victor? What are the manipulations?
  - to do: what values do we expect? This requires significant homework.
  - to do: technically speaking, how should we ensure that the values are valid? Compare to the previous set? Set a priori windows?
  - to do: technically speaking, how do we "correct" anomalous values coming from Victor? Insert an arbitrary value? The previous value? This may depend on which constant we are considering.
  - to do: what kind of tracking of constants do we have in hand, and what do we need?
  - question: once an approved set of constants has been produced, probably through selective modification of Victor's input, what are the additional steps and procedures to produce a candidate payload?
  - question: how does that payload get loaded into the database?
  - question: what checks can we do to verify the payload after it has been put into the database?
- validation of calibration with resolution studies
  - to do: define two methods for assessing the resolution and establish the design goal
  - question/to do: what is the impact of each of our calibration constants on the point resolution?
  - to do: finish the dedicated program for measuring the resolution
  - to do: establish benchmark tests (data and MC)
  - question: how far are we from our goal?
- improvement of offline reconstruction and simulation
  - to do: document what we have now. Stoyan has begun this process.
  - to do: find out what Stan was aiming for. Are there important differences?
  - question: why don't we use the noise matrix?
  - question: are we using the cross-talk information in the right way?
  - question: how good is the hit simulation?
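The constants-validation questions above (compare to the previous set, set a priori windows, decide how to "correct" anomalous values) could be sketched along the following lines. This is only an illustration: the ASCII file format, the window limits, and the fall-back-to-previous-value policy are all assumptions, not the agreed procedure.

```python
# Sketch of one possible validation step for a new set of calibration
# constants: check each value against an a-priori window and against the
# previous set, and fall back to the previous value when a new one looks
# anomalous. File format, windows, and fallback policy are assumptions.

def read_constants(path):
    """Read 'channel value' pairs from a plain ASCII file."""
    constants = {}
    with open(path) as f:
        for line in f:
            channel, value = line.split()
            constants[channel] = float(value)
    return constants

def validate(new, old, lo, hi, max_rel_change=0.10):
    """Return a cleaned set of constants plus a list of flagged channels."""
    cleaned, flagged = {}, []
    for channel, value in new.items():
        previous = old.get(channel)
        in_window = lo <= value <= hi
        stable = previous is None or abs(value - previous) <= max_rel_change * abs(previous)
        if in_window and stable:
            cleaned[channel] = value
        else:
            # Policy choice (one of the open questions): keep the previous
            # value rather than inserting an arbitrary one.
            cleaned[channel] = previous if previous is not None else value
            flagged.append(channel)
    return cleaned, flagged
```

A run over the real input files would then report the flagged channels for inspection before any candidate payload is produced.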
Planned Calibration Payloads
We foresee four payloads:
- CSCBadChambers_none_KillAllME42: kill all chambers in ME4/2, no other chambers
- CSCBadChambers_none_KillNoME42: kill no chambers in ME4/2, nor any other chambers
- CSCBadChambers_none_FiveLiveME42: kill all chambers in ME4/2 except for the five chambers we expect to install this spring
- CSCBadChambers_CRAFT_KillAllME42: kill all chambers in ME4/2, and chambers in the rest of the detector as observed during CRAFT
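The FiveLiveME42 list could be built by exempting the five installed chambers from the full ring, roughly as below. The chamber labels and the identity of the five live chambers are placeholders for illustration, not the actual installation plan; only the ME+4/2 ring is considered here.

```python
# Sketch of how the FiveLiveME42 bad-chamber list might be assembled:
# start from every chamber in the ME+4/2 ring and exempt the chambers
# declared live. Labels and the choice of five chambers are placeholders.

ME42_CHAMBERS = ["ME+4/2/%d" % n for n in range(1, 37)]  # 36 chambers in the ring

def bad_chambers_five_live(live_chambers):
    """All ME+4/2 chambers except those declared live."""
    live = set(live_chambers)
    return [ch for ch in ME42_CHAMBERS if ch not in live]

# Hypothetical choice of the five live chambers:
FIVE_LIVE = ["ME+4/2/9", "ME+4/2/10", "ME+4/2/11", "ME+4/2/12", "ME+4/2/13"]
```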
Map from the existing payloads to these new names:
| old (current) name | new name |
| no_ME42 | CSCBadChambers_none_KillAllME42 |
| empty_mc | CSCBadChambers_none_KillNoME42 |
| start_offline | CSCBadChambers_none_FiveLiveME42 |
| does not exist | CSCBadChambers_CRAFT_KillAllME42 |
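For scripts that migrate existing configurations to the new names, the mapping could be carried as a small lookup; the helper below is a hypothetical sketch, not an existing tool.

```python
# Hypothetical lookup for migrating configurations from the old payload
# names to the new CSCBadChambers naming scheme (CRAFT_KillAllME42 has
# no old counterpart, so it does not appear here).
OLD_TO_NEW = {
    "no_ME42": "CSCBadChambers_none_KillAllME42",
    "empty_mc": "CSCBadChambers_none_KillNoME42",
    "start_offline": "CSCBadChambers_none_FiveLiveME42",
}

def new_payload_name(old_name):
    """Translate an old payload name; raise if no new name is defined."""
    try:
        return OLD_TO_NEW[old_name]
    except KeyError:
        raise KeyError("no new name defined for payload %r" % old_name)
```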
-- MichaelSchmitt - 21 Jan 2009