VertexLuminosityCalibration

Vertex-based luminosity measurement

Code

The code is based around RootCore, "a package that helps developers build packages that work standalone (outside Athena)." RootCore provides some basic package management tools, and helps integrate the packages needed for the vertex-based luminosity measurement.

Start from a clean test area. Any recent release should be fine (best if >= 17.2), unless otherwise noted.
Check out the code from the SVN repository. First check out the VdM package:

kinit username@CERN.CH (if you don't want to type your password all the time!)
svn co $SVNINST/Institutes/LBNL/Luminosity/VdM/trunk VdM
Then call the checkout.sh and configure.sh scripts to check out and compile the other packages needed.
source VdM/scripts/checkout.sh
source VdM/scripts/configure.sh

Remarks:

  • Every time you set up your code, remember to run from your test area:
asetup 17.2.2,here (if athena is not already set up)

After the first time you've checked out the code, you'll instead want to simply make sure your version is up to date. To get the latest version of all the packages (and compile them if anything's changed) do:

source VdM/scripts/svn_all_packages.sh update
source VdM/scripts/configure.sh

Now you can set up the code:

source RootCore/scripts/setup.sh

  • We usually assume that the environment variable $TestArea is set to the path of your test area.

  • To use the Vertex-Unfolding method (see below) you need to have the USE_UNFOLD symbol defined in VdM/cmt/Makefile.RootCore. This is on by default, but it can be switched off (by not defining the symbol) if you have problems setting up the unfolding code.
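
For reference, a quick way to check whether the symbol is currently defined (this assumes the usual RootCore convention of passing extra compiler flags through the PACKAGE_CXXFLAGS field; the exact line in your Makefile.RootCore may look slightly different):

$ grep USE_UNFOLD VdM/cmt/Makefile.RootCore
PACKAGE_CXXFLAGS = -DUSE_UNFOLD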

Analysis flow

We employ two main methods for the luminosity determination, plus a back-up one:
  • Vertex-Counting: based on counting number of vertices reconstructed (VtxC)
  • Event-Counting: based on counting the number of events with at least 1 reconstructed vertex (EvC)
  • Vertex-Unfolding: based on unfolding the per-event distribution of the number of reconstructed vertices (back-up, not fully supported) (Unf)

The analysis flow is broadly similar for all three algorithms, with some exceptions. We outline the general flow below and refer to the specific sections below for a detailed explanation of how to perform each step.

  • Derive pileup corrections from Monte Carlo
    • Produce D3PD for reference MC sample (if not available already)
    • Run over D3PD to produce raw histograms and vertex counts.
    • For VtxC:
      • Produce D3PD for a reference low pile-up sample for the masking correction (if not available already)
      • Derive the basic probability density function for the masking correction
      • Calculate the fake rate correction from the reference MC sample
    • For EvC:
      • Calculate the fake rate correction from the reference MC sample
    • For Unf:
      • Produce the Transformation Matrix from the reference MC sample
  • Calibration of the method
    • Produce D3PD for Van der Meer scan (if not available already)
    • Run over D3PD to produce raw histogram and counts
    • Run Van der Meer analysis and calculate visible cross sections
  • Producing luminosity results
    • Produce D3PD for the desired physics run
    • Run over D3PD to produce raw histogram and counts
    • Run LumiVtx analysis for normal physics run (or mu-scan, if this is the case)

Quick Runthrough

You can follow this tutorial to do a quick runthrough of the code.

Data and MC samples

We divide samples depending on the tracking settings used in the reconstruction stage. We currently have 2 main settings:
  • Default (a.k.a. 17.2-normal): Uses standard 17.2 tracking and vertexing setup for 2012 data
  • VtxLumi (a.k.a. 17.2-VtxLumi): Uses custom tracking setup (the one used in 2012 for PixelBeam stream reconstruction)

Samples for VtxLumi settings

| AOD dataset | D3PD dataset | Comment |
| Data samples: 7 TeV calibration/test | | |
| user.beate.trkReco_VtxLumi.data11_7TeV.00182013.calibration_VdM.daq.RAW.v01_EXT1/ | user.spagan.user.beate.trkReco_VtxLumi.data11_7TeV.00182013.calibration_VdM.daq.VTXD3PD.v01_EXT0.v1.0/ | vdM May 2011, reprocessed with 17.2-doVtxLumi. ESD available, not AOD |
| user.beate.trkReco_VtxLumi.data11_7TeV.00188951.calibration_VdM.daq.RAW.v01_EXT0/ | user.spagan.user.beate.trkReco_VtxLumi.data11_7TeV.00188951.calibration_VdM.daq.VTXD3PD.v01_EXT0.v1.0/ | mu-scan 2011, reprocessed with 17.2-doVtxLumi. ESD available, not AOD |
| Data samples: 8 TeV calibration/test | | |
| user.beate.trkReco_VtxLumi.data12_8TeV.00201351.calibration_VdM.daq.VTXD3PD.v14_EXT0/ | user.spagan.user.beate.trkReco_VtxLumi.data12_8TeV.00201351.calibration_VdM.daq.VTXD3PD.v14_EXT0.v1.0/ | vdM April 2012, 17.2-doVtxLumi |
| Calibration/Test samples 8 TeV | | |
| data12*PixelBeam*AOD* | data12*PixelBeam*NTUP_IDVTXLUMI* | automatic PixelBeam stream ntuples (AOD available for first runs) |
| MC samples: 7 TeV private samples on PDSF (still need to dq2-put them) | | |
| | | MinBias, Pythia6 AMBT2 |
| | | MinBias, Pythia8 A2M |
| | | MinBias, Pythia8 A2M, DL |
| MC samples: 8 TeV official sample | | |
| valid1.107499.singlepart_empty.recon.AOD.e603_s1469_r3560/ | user.beate.trkReco_VtxLumi.valid1.107499.singlepart_empty.recon.AOD.e603_s1469_r3560.v6_D3PD/ | MinBias, Pythia8 A2M (default for fake rate estimation) |

D3PD production

We can produce various kinds of D3PDs, with different levels of granularity. They are all produced with the Reco_trf.py transform (unless you want to re-run the vertex algorithm with significantly different settings). The standard one, which contains the most information, is described below.

You need ATLAS release 17.2.1.4 (recommended) or greater. You also need to check out TrackD3PDMaker-01-01-16-07 or greater (or TrackD3PDMaker-01-03-05 or greater, which is not compatible with the 17.2 release but will be with the next major release).

To produce D3PD from AOD or ESD datasets, you can then use Reco_trf.py as follows:

# Reco_trf.py command for producing Vertex Luminosity D3PD (detailed information).
#  Dumps AOD vertex container, runs non beamspot constrained vertices
#   Filter tight truth info (only stable charged particles of in-time pile-up are kept)
#  Input/output files just for autoconfiguration
#  maxEvents is set by submitPAthena.py
Reco_trf.py maxEvents=__MAXEVENTS__ autoConfiguration=everything preExec='InDetFlags.doVertexFindingForMonitoring.set_Value_and_Lock(True);InDetFlags.doSplitVertexFindingForMonitoring.set_Value_and_Lock(False);InDetFlags.useBeamConstraint.set_Value_and_Lock(False);from TrackD3PDMaker.VertexD3PDAnalysisFlags import VertexD3PDAnalysisFlags;VertexD3PDAnalysisFlags.useAllVertexCollections.set_Value_and_Lock(True);VertexD3PDAnalysisFlags.useBeamspot.set_Value_and_Lock(True);VertexD3PDAnalysisFlags.useTracks.set_Value_and_Lock(True);VertexD3PDAnalysisFlags.filterTightTruth.set_Value_and_Lock(True)' inputAODFile=%IN outputNTUP_IDVTXLUMIFile=%OUT.IDVTXD3PD.root
where
  • __MAXEVENTS__ is the maximum number of events to run on (-1 = all); it is normally set by the submission script (submitPAthena.py)
  • %IN is your input file name (AOD in the example, use inputESDFile for running on ESD)
  • %OUT is your output prefix, so that the output name will be %OUT.IDVTXD3PD.root
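
If you submit to the grid, a sketch along these lines should work (this assumes submission via pathena with the --trf option; the output dataset name is a placeholder and the preExec string is the same long string shown in the full command above):

$ pathena --trf "Reco_trf.py maxEvents=-1 autoConfiguration=everything preExec='<same preExec as above>' inputAODFile=%IN outputNTUP_IDVTXLUMIFile=%OUT.IDVTXD3PD.root" \
          --inDS valid1.107499.singlepart_empty.recon.AOD.e603_s1469_r3560/ \
          --outDS user.<nickname>.trkReco_VtxLumi.valid1.107499.singlepart_empty.v1/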

Deriving Pileup Corrections

Deriving the pileup corrections generally has two steps. First, you need to process the D3PD for the appropriate Monte Carlo sample. Second, the executables built from initializeFakeCorrection.cxx and initializePuCorr.cxx create a set of TGraphs and TH1Ds, which other pieces of the analysis can use to calculate the pileup corrections quickly.

The corrections are specific to a given set of settings; a unique set of pileup corrections should be derived for each energy (7 or 8 TeV) and reconstruction setting (17.2-normal or 17.2-VtxLumi). Additionally, for Monte Carlo-driven corrections, you can try varying the simulation beamspot length (we've used 45mm, 55mm, and 65mm) or the generator (pythia 6 or pythia 8, various tunes). In each case, you need to give the set of corrections a unique name, for example mc_8TeV_17.2_VtxLumi_pythia8_pu or data_7TeV_17.2_normal. This name is used by the rest of the code to look up the pileup corrections.
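
The two initialization steps described in the following sections are then invoked with this name as the sample tag (a sketch; check initializePuCorr.cxx and GlobalSettings.cxx for how the tag is mapped to input/output paths):

$ ${TestArea}/RootCore/bin/initializePuCorr -t <sample name>            # e.g. mc_8TeV_17.2_VtxLumi_pythia8_pu
$ ${TestArea}/RootCore/bin/initializeFakeCorrection -t <sample name>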

Masking correction

  • You need a D3PD corresponding to a data or Monte Carlo sample with low pileup, with settings matching the data for your luminosity measurements. Practically speaking, if you want to correct masking in Monte Carlo (as needed above, for the fake correction), use the same Monte Carlo sample to derive the masking correction. If you want to correct masking in data, use either the sidebands of the 2011 May vdM scan (7 TeV) or the 2012 low-pileup run 200805 (8 TeV).

    • If running over Monte Carlo, use TrackingAnaMC.cxx, which belongs to the D3PDMC package.
                TrackingAnaMC -b -q -a -o $LUMINOSITY_DATA_DIR/VtxTruthMatchResults/<MC sample name>/InDetTrackD3PD_results <input root files>
            
      • This can take a long, long time. So, you should consider (1) running a batch job rather than running interactively, and (2) splitting the job into subjobs (i.e. only do 4 D3PD files per job) and merging the results with hadd. Check out D3PDMC/scripts/runAllTrackingAnaParallel.py for an example with the SGE batch system; a rough sketch is also shown at the end of this section.
    • If running over data, use TrackingAnaData.cxx, which belongs to the D3PDData package. Make sure the flag INPUT_DATA is set in D3PDData/cmt/Makefile.RootCore, rather than INPUT_MC!
                TrackingAnaData -b -q -a -o $LUMINOSITY_DATA_DIR/VertexCounts/<run>/<settings>/InDetTrackD3PD_results <input root files>
            
      • This generally goes faster than TrackingAnaMC, since no truth matching is performed.

  • Once you have processed the D3PD and obtained the InDetTrackD3PD_results.root file, the masking correction is initialized using:
             initializePuCorr -t <sample name>
          
    • Note that the relevant paths are currently hard-coded into initializePuCorr.cxx. See the code, and also GlobalSettings.cxx, for specific info on how the paths work.
    • initializePuCorr makes the masking PDFs, i.e. the probability of masking/merging a pair of vertices as a function of Δz. The other pieces of the analysis will use these masking PDFs to derive a masking correction specific to their beamspot lengths.
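
A rough sketch of the job-splitting mentioned above (this is not the actual runAllTrackingAnaParallel.py; it assumes an SGE cluster with qsub, that the RootCore environment is set up in the batch shell, and uses placeholder paths):

      # split the input D3PDs into groups of 4 and submit one SGE job per group
      files=(/path/to/mc_d3pd/*.root)
      outdir=$LUMINOSITY_DATA_DIR/VtxTruthMatchResults/<MC sample name>
      for ((i=0; i<${#files[@]}; i+=4)); do
        chunk="${files[*]:i:4}"
        echo "TrackingAnaMC -b -q -a -o ${outdir}/InDetTrackD3PD_results_${i} ${chunk}" | qsub -cwd -N trkAnaMC_${i}
      done
      # when all jobs have finished, merge the partial outputs
      hadd ${outdir}/InDetTrackD3PD_results.root ${outdir}/InDetTrackD3PD_results_*.root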

Fake correction

  • Since the fake correction is applied directly from Monte Carlo to data, you should make some effort to obtain a Monte Carlo sample with conditions similar to the data, in particular with a similar beamspot length. We've found that the energy and the generator don't really make a difference. It is vital, however, that all of the truth information is included, particularly information on secondary particles, which is often stripped to save space.
  • You should already have run over the D3PD in the previous step, for the masking correction. If not, follow the directions for TrackingAnaMC above, and complete the masking correction initialization.
  • Initialize the fake correction:
             initializeFakeCorrection -t <MC sample name>
          
  • This creates a "lookup TGraph," mapping the number of vertices reconstructed per event (times a masking correction factor) to the average number of fake vertices per event.

Deadtime corrections and trigger prescales

If your data comes from a "real" trigger rather than a random trigger (for example, an MBTS trigger), then a correction needs to be applied for deadtime. The scripts for deadtime calculation and prescale determination can be found at VdM/scripts/deadtime.
  • Hopefully you won't need to use this very much; random triggers are almost always used, with the exception of vdM scans.
  • Note that the original script was written by David Berge in Python. This was very slow, so I translated a large part of it to C++ (see deadtime.cxx for more details). You should therefore use deadtime.cxx rather than the .py scripts.

Transformation Matrix

This is only needed for the Unf method, and it is the only correction that method requires.

The main program is VtxLumiUnfold/util/GetTransferFunctionMC.cxx. Call it with -h for up-to-date help:

$ ${TestArea}/RootCore/bin/GetTransferFunctionMC -h
Usage: GetTransferFunctionMC [options] inputD3PDs
Options:
         t: Tree name
         d: debug value
         e: Max events
         o: Output file name

A typical usage will be:

$ ${TestArea}/RootCore/bin/GetTransferFunctionMC -o TransferFunctionMC.root /path_to_d3pd/input_d3pd.root

After the results are ready, remember to update the VdM/Root/GlobalSettings.cxx file with the correct path before proceeding to the next step.

Production of raw counts and histograms

In order to get the raw numbers for the measurements we run over the corresponding D3PD. It can be the vdM scan, the mu-scan, or any regular physics fill. The corresponding program is D3PDData/bin/TrackingAnaData. The most up-to-date help is available by calling the program with no arguments:

$ D3PDData/bin/TrackingAnaData
D3PDData/bin/TrackingAnaData [-h] [options] [InputFile1.root [InputFile2.root [...]]]
Options:
         e MaxEvents: Set number of events to process.
         o OutputPrefix: Set output prefix (.root and .txt will be added).
         v: Print verbose debug.
         t TriggerChain: Select events passing TriggerChain.
         c chi2Cut: Apply cut on chi2/ndf of reconstructed vertices.
         C Chain: Name of input tree.
         h: Print this screen.
         p: Enable mode for regular physics runs. (currently not used)
         i InputFileName: Uses files listed in InputFileName as input. (currently disabled)

For basic usage you just need to specify the output file name (with no .root extension!) with the -o option and give the input D3PDs as arguments. The program will detect special known run numbers (vdM or mu scans) to ensure the proper pseudo-LB and BCID numbers are loaded.
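
For example (a sketch with placeholder input paths; the output location here follows the VertexCounts/<run>/<settings> structure used in the masking-correction section above):

$ D3PDData/bin/TrackingAnaData -o $LUMINOSITY_DATA_DIR/VertexCounts/201351/17.2-VtxLumi/InDetTrackD3PD_results /path/to/d3pd/201351/*.root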

Also remember to edit VdM/Root/GlobalSettings.cxx file with the correct paths/file names for the various corrections and output location.

Note: If you need to run over an MC sample (for a closure test or similar), you can enforce a fake pseudo-LB binning based on the number of true interactions by defining INPUT_MC instead of INPUT_DATA as a compiler-level symbol. To do that, edit D3PDData/cmt/Makefile.RootCore. Make sure you revert the change when you need to run on data again (todo: move to a run-time option).

Monte Carlo closure test

In general you should be able to run the same procedure on MC that you use for standard runs (see the Physics run analysis section below). One important difference is that when running D3PDData over the D3PD you need to enable the special INPUT_MC flag at compilation stage, as outlined above in the dedicated section. This will allow you to bin the results as a function of the number of interactions.

For vertex unfolding only, we also have a dedicated quick program for closure tests: VtxLumiUnfold/util/UnfoldClosure.cxx. Help is available, as usual, by calling the program with the -h option. The input file is a ROOT file with various histograms containing both the true and the reconstructed number of vertices. You can generate these input files by running VtxLumiUnfold/util/GetNVtxMuSlices.cxx first and then VtxLumiUnfold/macros/Get1DNVtx.C (the latter runs interactively in ROOT; the function GetAll1DNVtx(...) unpacks the 2D histogram output of GetNVtxMuSlices into separate 1D histograms that can be used directly as input to UnfoldClosure).
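
A very rough sketch of this chain (the option names are assumptions, not verified; call each program with -h and look at the macro before relying on this):

$ ${TestArea}/RootCore/bin/GetNVtxMuSlices -o NVtxMuSlices.root /path/to/mc_d3pd/*.root
$ root -l VtxLumiUnfold/macros/Get1DNVtx.C     # then run GetAll1DNVtx(...) interactively to unpack the 2D histograms
$ ${TestArea}/RootCore/bin/UnfoldClosure -h    # list the options, then run on the resulting 1D histograms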

van der Meer scan analysis

The pileup corrections are applied to the raw histograms/numbers using the tools in the VdM/ package. In addition, the obtained scan distributions (counts vs. separation) are fit to extract the visible cross section values. Corrections for deadtime and trigger prescales are also applied when needed.

In order to process a given van der Meer scan, you can use the following command (here as an example for the May 2011 scan, run 182013, running the unfolding-based analysis):

$ ${TestArea}/RootCore/bin/runVdM -r 182013 -s 17.2-normal -n -v -f
You should also redirect the output to a log file, since there is a lot of print-out that can be useful for debugging and for checking that things make sense.
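
For example (the log file name is arbitrary):

$ ${TestArea}/RootCore/bin/runVdM -r 182013 -s 17.2-normal -n -v -f > runVdM_182013_Unf.log 2>&1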

Call with the -h option to see help on what each flag does (do this to get up-to-date options):

$ ${TestArea}/RootCore/bin/runVdM -h
runVdM -r run -s settings [-e | -u] [-n] [-u systName] [-v]
r: Specify run number (178013, 178064, 183013, 201351)
s: Specify tracking settings (16.X-normal, 17.2-normal, 17.2-VtxLumi)
n: Delete previous histograms
u: Enable given systematic uncertainty
v: Write out debug histograms

By default VtxC analysis is run, otherwise:
e: Doing event-based analysis EvC (default: VtxC)
f: Doing unfolding-based analysis UnfC (default: VtxC)

You should first edit the beginning of the file util/runVdM.cxx in order to set the correct paths for getting the needed D3PD and corrections. See the source file for a description on what the expected directory structure is. One day we'll migrate this to a common configuration file.

You can then look at results using the helper macro VtxCalibration/util/tables.cxx, help as usual:

$ ${TestArea}/RootCore/bin/tables: invalid option -- h
Usage: tables -r run -s settings -m method -p path

For example:

$  ${TestArea}/RootCore/bin/tables -p /eliza18/atlas/spagan/VtxLumi/run/VtxUnfold/results/vdM/ -r 182013 -s 17.2-normal -m NUnf

where the method argument can be either NVtx, NEvt or NUnf.

To produce extra plots of the consistency of the calibration (among BCIDs, algorithms, fit methods, etc.), you can use the VdM/util/VdMPlots.cxx program. The syntax is very similar to that of tables, but it is mandatory to run it for a fixed number-of-tracks cut. Also note that the output will be placed in the same output folder as the main vdM analysis, in the figures subdirectory. This directory must exist before the program is launched (so the first time you need to create it).

Example of usage:

$ ${TestArea}/RootCore/bin/VdMPlots -r 182013 -s 17.2-normal -m NVtx -n 5
where -n specifies the number-of-tracks cut, while the other options are the same as for the tables program described above.

Looking up the vdM results

The vdM scan results (visible cross sections and fit information) are stored in a TTree. The class VtxCalibration.cxx provides an interface between the TTree and whatever needs to look up a visible cross section; it picks out the appropriate fit function, and calculates the average visible cross section from all scans and BCIDs.

Each branch in the tree is a vector, which stores information from several different fit functions in a parallel manner. You can see the order of the fit functions in VdM/util/runVdM.cxx. The current set of fit functions is:

  • Single gaussian
  • Single gaussian plus constant (constant=background)
  • Single gaussian plus constant (constant=luminosity)
  • Double gaussian
  • Double gaussian plus constant (constant=background)
  • Double gaussian plus constant (constant=luminosity)
  • Spline
At some point, we should try to implement some of the fits used by the Columbia group for their 2.76 TeV analysis, such as a Gaussian times a quartic polynomial.

mu scan analysis

First, of course, you need to download the D3PD for run 188951 and process it with D3PDData. After you've done this, the actual luminosity vs. lumiblock is calculated using LumiVtx/util/GetLuminosity.cxx. Once you've set up the usual settings, the syntax is pretty straightforward. Call the program, as usual, with the -h option for up-to-date help.

[AtlasVtxLumi_17.2.X - 17.2.1.4] 17.2-normal$ /home/spgriso/code/AtlasVtxLumi_17.2.X/RootCore/bin/GetLuminosity -h
/home/spgriso/code/AtlasVtxLumi_17.2.X/RootCore/bin/GetLuminosity -r run -s settings [options]
Options:
         -r, --run Run: Set run number to Run
         -s, --settings Settings: Set reconstruction setup Settings
         -p path: Specify output path, outPrefix will be built as path/run/settings[/syst]/lumi_r
         -o, --outPrefix OutPrefix: Specify full output prefix for output files. Directories must already exist!
         -v, --verbose: Enable verbose printout.
         -m: Force re-making pileup correction.
         -f Scale: Enable mu-scaling in fake correction of Scale
         -u Syst: Enable systematics Syst
         -n nTrkCut: Run only with nTrkCut track cut
         -i bcid: Run only on the given bcid
         -h, --help: Print this screen.

For example:

$ ${TestArea}/RootCore/bin/GetLuminosity -r 188951 -s 17.2-normal -p /u/spgriso/code/AtlasVtxLumi_17.2.X/run/VtxUnfold/results/muScan/ -v

This will produce results for all known algorithms and track cuts.
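
If you want to restrict the processing, the -n and -i options listed in the help above can be used; for instance (the track cut and BCID values here are just examples):

$ ${TestArea}/RootCore/bin/GetLuminosity -r 188951 -s 17.2-normal -p /u/spgriso/code/AtlasVtxLumi_17.2.X/run/VtxUnfold/results/muScan/ -n 5 -i 1 -v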

Other algorithms

The easiest way to get luminosity for other algorithms is to use Eric Torrence's magic utility. This is a little bit separate from the rest of the analysis, but is necessary if you want to make the usual vertex-over-BCM plots.
      cd $(Dedicated test area, NOT the RootCore test area!)
      asetup 17.0.4,here,builds
      cmt co Database/CoolLumiUtilities
      cd Database/CoolLumiUtilities/cmt
      gmake
      calibNtupleMaker.py --usage
   
There is a second utility in the same folder, vdMScanMaker.py, which can be used instead if you need luminosity in pseudolumiblocks, rather than lumiblocks.

Making Plots

In order to make a few plots comparing the estimated vertex luminosity with standard algorithms, you can use the script LumiVtx/scripts/plots.py. The syntax is
plots.py -r run -s settings -p path
where run and settings are the usual run number and settings. The path parameter is the base path to the input files (the output of GetLuminosity), up to but not including the run-number folder (i.e. the same format used by GetLuminosity). For example:
$ ${TestArea}/LumiVtx/scripts/plots.py -r 188951 -s 17.2-normal -p /u/spgriso/code/AtlasVtxLumi_17.2.X/run/VtxUnfold/results/muScan
The output will be saved as pdf files in the same directory as the output of GetLuminosity.

(Normal) Physics run analysis

In principle the same program used for the mu-scan can be applied to any physics run, with analogous syntax. However, the way the program currently implements the various pile-up corrections may not be optimal: the masking correction is evaluated for each LumiBlock, and the statistics may be too poor for this.

-- SPaganGriso - 20-Jun-2012 -- JamesRobinson - 06-Jul-2012
