Single LQ Analysis

8 TeV results

Outline of overall analysis

NTuple production

Production of flat trees for plotting and analysis

  • Flat trees are produced for plotting and analysis. Each branch is simply one variable, such as "pt_jet1" or "M_mumu", for easy plotting with ROOT macros of the user's choice and for further analysis (limit setting, background studies, etc.). A short plotting sketch is given after the checkout commands below.
  • To check out the flat-tree production code, do:

cmsrel CMSSW_7_4_5
cd CMSSW_7_4_5/src/
git clone git@github.com:dnash86/NTupleAnalyzer.git
cd NTupleAnalyzer/
git checkout SingleLQAnalysis2015_PortFromDave
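
  • As an illustration of how the resulting flat trees can be used, below is a minimal PyROOT sketch that draws M_mumu with a cut on pt_jet1. The file name, the tree name ("PhysicalVariables"), and the pt_jet1 threshold are illustrative assumptions only; substitute the actual output file and tree name produced by your NTupleAnalyzer.py run.

import ROOT

# Open a flat-tree output file (file and tree names are assumed for illustration).
f = ROOT.TFile.Open("ZJets_tag_name.root")
tree = f.Get("PhysicalVariables")  # assumed tree name; check the actual output

# Each branch is a single variable, so plotting is a one-liner:
# draw the dimuon mass for events with a hard leading jet (cut value is illustrative).
canvas = ROOT.TCanvas("c", "c")
tree.Draw("M_mumu", "pt_jet1 > 50.0")
canvas.SaveAs("M_mumu.png")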

  • The analyzer itself is NTupleAnalyzer.py, with supporting modules objectID.py, variableComputation.py, and getComplexObjects.py. These supporting modules should in general be left unchanged by the user. Users may add additional variables for their own analysis, but the object identification criteria should agree among the subgroups of the LQ group, so such changes should either be avoided or be communicated to the other analyzers.
  • NTupleAnalyzer.py is in turn run by AnalyzerMakerFastLocal.py. The relevant flags for running are:
    • "-i ntuplelist.csv" The list of the NTuples to run over. It must be formatted correctly (details to come).
    • "-py NTupleAnalyzer.py" The analyzer that is run, in this case the base NTupleAnalyzer.py. If orthogonal versions are made, replace it here to run the new version.
    • "-t tag_name" The tag of the run, for example "RunDYTest".
    • "-j json.txt" The JSON file of good runs. It is necessary for data; when running over MC, a dummy file name (which need not exist) is sufficient.
    • "-p 1" Whether or not to store PDFs; can be 1 or 0.
    • "-q 1nd" The lxbatch queue to submit to. It must exist, so: 8nm, 1nh, 8nh, 1nd, 2nd, 1nw, etc. (typically 8nh or 1nd).
    • "-s 100" Job splitting; this example would run 100 ROOT files per job.
    • "--merge" Not required, but generally used; automatically merges the results.
    • "-m ee" Mode. Modes currently implemented are "ee", "emu", and "mumu". This propagates the mode to NTupleAnalyzer.py, changing the skim and the geometric separation of leptons and jets for the appropriate final state (a minimal sketch of this cleaning is given below).

  • Putting this all together, here is a sample command to run the analyzer over some DY MC:

python AnalyzerMakerFastLocal.py -i NTupleInfo2015Full_Zjets.csv -py NTupleAnalyzer.py -t tag_name -j json.txt -p 1 -q 1nd -s 100 --merge -m ee

  • One can either run this in a screen session on lxplus, or run it, exit the process with Ctrl-C/Ctrl-Z, and return to it later by issuing the same command with "-d {working directory that is created during running}" appended (see the example below). This will pick up where the previous command left off, resubmitting failed jobs, checking jobs for output, and eventually merging the results.
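
  • For example, to resume the sample DY run above (keeping the placeholder for the directory created by that run):

python AnalyzerMakerFastLocal.py -i NTupleInfo2015Full_Zjets.csv -py NTupleAnalyzer.py -t tag_name -j json.txt -p 1 -q 1nd -s 100 --merge -m ee -d {working directory that is created during running}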

-- DavidNash - 2015-11-02
