Location of Gen-level code

/cmshome/khurana/HZZ2l2tauAnalysis/MCCheck/CMSSW_4_2_5/src/Analyzer/MyAnalyzer

Plotting macros for 2011

/cmshome/khurana/NewHZZAnalyzer/Moriond13/WithElectronScale/CMSSW_5_3_5/src/ADDmonophoton/TreeMaker/NewPostAnalyzer/2011/2011Datacards

Instructions to run code to get results for 2l2tau analysis

Location of 2011 EDAnalyzer :

/cmshome/khurana/NewHZZAnalyzer/CMSSW_4_2_5/src/ADDmonophoton/TreeMaker/

Location of 2012 EDAnalyzer

/cmshome/khurana/NewHZZAnalyzer/Moriond13/WithElectronScale/CMSSW_5_3_7/src/ADDmonophoton/TreeMaker

The analysis frameworks for 2011 and 2012 are not the same.

For 2011 it is a two-step process:

1. EDAnalyzer : stores the necessary physics-object information in a ROOT tree.

2. PostAnalyzer : uses the output of step 1, applies the selection cuts for each final state, and saves the results as histograms and a tree.

For 2012 it is a three-step process:

1. EDAnalyzer : stores the necessary physics-object information in a ROOT tree.

2. Skimmer : the 2012 data files are very large, can be lost because of storage issues, and take a long time to process if every event is run over, so it is better to first skim them, keeping only the minimal information required for the 2l2tau analysis.

3. PostAnalyzer : uses the output of step 2, applies the selection cuts for each final state, and saves the results as histograms and a tree. This is similar to step 2 of the 2011 workflow but runs faster because the number of events is smaller. A rough sketch of the full 2012 chain is given below.
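As an illustration, the 2012 chain looks roughly like this; the configuration file name below is a placeholder, the actual configs live in the TreeMaker directory listed above:

## step 1 : EDAnalyzer, run inside CMSSW (the config name here is a placeholder)
cd /cmshome/khurana/NewHZZAnalyzer/Moriond13/WithElectronScale/CMSSW_5_3_7/src/ADDmonophoton/TreeMaker
cmsenv
cmsRun treemaker_cfg.py
## step 2 : Skimmer, run on the trees from step 1 (see "Run Skimmer" below)
## step 3 : PostAnalyzer, run on the skimmed trees (see "Running Postanalyzer on Merged Skim files" below)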

In the future I will change the framework for the 2011 data analysis as well, so that a single code can be used for both 2011 and 2012.

The ROOT trees with all the information about the required objects are located at cmssusy and can be accessed at:

###########################  For Data ###################
## For data 2012 
ls -ltr   /lustre/cms/store/user/khurana/HZZAnalysis/ForPaper/2012ABCDWithNewECAL
## For data 2011
ls -ltr   /lustre/cms/store/user/khurana/HZZAnalysis/ForPaper/2011Data_Again_WithSIP
## For Merged data of 2011
ls -ltr   /lustre/cms/store/user/khurana/HZZAnalysis/ForPaper/Merged_2011Data


######################### For MC #######################
## For MC 2012

## For MC 2011

Run Skimmer

To set up the directory structure and compile the code, execute the script:

cd HZZMacroAndScripts/Skimmer
./SetUpJobSubmission.sh datedir 
## for example ###    source SetUpJobSubmission.sh 19Sept2013ForHighMassPaper

This will create a directory named SkimmerSubmission_${datedir} and copy the relevant scripts into it. Now go to this newly created directory and run the submission script, giving it the input file directory (as listed at the beginning) and the output date directory:
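For orientation, a setup script like SetUpJobSubmission.sh is expected to do little more than the following; this is only a sketch, not the actual script contents:

## sketch : create a dated working area and copy the submission machinery into it
datedir=$1
mkdir -p SkimmerSubmission_${datedir}
cp SubmitBatchFor2012Skimmer.sh ResubmitErrorJobs.sh ResubmitAll.sh SkimmerSubmission_${datedir}/
## (the real script also compiles the skimmer code)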

cd SkimmerSubmission_19Sept2013ForHighMassPaper

## For Data
./SubmitBatchFor2012Skimmer.sh /lustre/cms/store/user/khurana/HZZAnalysis/ForPaper/2012ABCDWithNewECAL/ 19Sept2013ForHighMassPaper
## For Old MC
./SubmitBatchFor2012Skimmer.sh /lustre/cms/store/user/khurana/HZZAnalysis/ForPaper/MC/28March_NewMVAID/ 19Sept2013ForHighMassPaperMCOld
## For New MC
./SubmitBatchFor2012Skimmer.sh /lustre/cms/store/user/khurana/HZZAnalysis/ForPaper/Merged_2012MCWithNewECAL/ 19Sept2013ForHighMassPaperMCNew

Resubmitting failed batch jobs

Once all jobs have finished, please check that they all completed successfully. In 1-2% of the cases I have seen jobs crash because of input/output errors and need a resubmission. The number of failed jobs is always very small, but they are difficult to find by hand when there are many tens of jobs in total. To avoid such situations, please use

ResubmitErrorJobs.sh or ResubmitAll.sh to resubmit all the failed jobs.

Do either:
source  ResubmitErrorJobs.sh  dirname_in_which_jobs_were_submitted  ## to resubmit jobs from one directory

## or 

source ResubmitAll.sh ## to resubmit jobs from all directories; make sure you first create a text file with the list of all directories
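If you want to spot the failed jobs by hand before resubmitting, a quick grep over the job log files is usually enough; the directory layout and the error strings below are only examples and depend on how the batch area is organised:

## list log files that mention a crash (the patterns are examples)
grep -r -l -i -E "segmentation|break|killed" SkimmerSubmission_19Sept2013ForHighMassPaper/
## compare the number of output root files with the number of submitted jobs
ls /lustre/cms/store/user/khurana/SkimmerFiles/SkimmedRootFiles_19Sept2013ForHighMassPaper/ | wc -l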
Once all batch jobs have finished successfully, you are ready to merge the files.

Merging the Skimmed files

Once the skimming jobs are done you can start merging the output files they produced. The script to merge the files is also in the same directory.

## To merge data file use 
 ./MergeDataFiles.sh /lustre/cms/store/user/khurana/SkimmerFiles/SkimmedRootFiles_19Sept2013ForHighMassPaper/ /lustre/cms/store/user/khurana/Merged_SkimmerFiles/Merged_SkimmedRootFiles_19Sept2013ForHighMassPaper

## To merge Old MC Files use 
./MergeDataFiles.sh  /lustre/cms/store/user/khurana/SkimmerFiles/SkimmedRootFiles_19Sept2013ForHighMassPaperMCOld/  /lustre/cms/store/user/khurana/Merged_SkimmerFiles/Merged_SkimmedRootFiles_19Sept2013ForHighMassPaperMCOld


## To Merge New MC Files use 
## but there is no need to merge them
 ./MergeDataFiles.sh /lustre/cms/store/user/khurana/SkimmerFiles/SkimmedRootFiles_19Sept2013ForHighMassPaperMCNew/  /lustre/cms/store/user/khurana/Merged_SkimmerFiles/Merged_SkimmedRootFiles_19Sept2013ForHighMassPaperMCNew/
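For reference, a merge script like MergeDataFiles.sh essentially runs hadd over the per-job output files of each sample; a minimal sketch, assuming one subdirectory per sample (the layout and names are assumptions):

## sketch : hadd the per-job skim files of every sample into one file each
indir=$1     ## e.g. the SkimmedRootFiles_... directory
outdir=$2    ## e.g. the Merged_SkimmedRootFiles_... directory
mkdir -p ${outdir}
for sample in ${indir}/*/ ; do
    name=$(basename ${sample})
    hadd -f ${outdir}/${name}.root ${sample}/*.root
done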

Running Postanalyzer on Merged Skim files

The PostAnalyzer .C and .h files are located inside the directory:
cd  HZZMacroAndScripts/2012PostAnalyzer

This directory also contains the shell and Python scripts for further processing of the tuples. SetUpJobSubmission.sh is the script that prepares the setup area; it is similar to the script used at the skimming step.

./SetUpJobSubmission.sh 19Sept2013ForHighMassPaper

## For Data
./SubmitBatchFor2012PostAnalyzer.sh /lustre/cms/store/user/khurana/Merged_SkimmerFiles/Merged_SkimmedRootFiles_19Sept2013ForHighMassPaper 19Sept2013ForHighMassPaper
## For Old MC
./SubmitBatchFor2012PostAnalyzer.sh /lustre/cms/store/user/khurana/Merged_SkimmerFiles/Merged_SkimmedRootFiles_19Sept2013ForHighMassPaperMCOld 19Sept2013ForHighMassPaperMCOld

## For high mass GGH 
source RunOnMerged_GGH_2012.sh /lustre/cms/store/user/khurana/SkimmerFiles/SkimmedRootFiles_19Sept2013ForHighMassPaperMCNew/NewSignalSample/ /lustre/cms/store/user/khurana/PostAnalyzer/ Merged_PostAnalyzerFiles_19Sept2013ForHighMassPaperMCHighMass

## For high mass QQH
source RunOnMerged_QQH_2012.sh /lustre/cms/store/user/khurana/Merged_SkimmerFiles/Merged_SkimmedRootFiles_19Sept2013ForHighMassPaperMCOld/ /lustre/cms/store/user/khurana/PostAnalyzer Merged_PostAnalyzerFiles_19Sept2013ForHighMassPaperMCHighMass


Merging the postanalyzer files

Merge these output files from the PostAnalyzer using
MergeDATARootFilesCleanDirectory.sh
& 
MergedDataFiles.sh

Control Region & Fake Rate Plots

Once all the PostAnalyzer files are merged, you can make plots using the macros placed inside Results. In Results you will have to produce the following:

1. Plots made with Results/StackFactory.C . This will produce Data/MC plots for:

a. control regions for all final states

b. cut-flow plots for all final states

c. (functionality to be added) cut-flow numbers for each final state, printed in LaTeX form

d. fake-rate plots for electrons, muons and taus

e. one output ROOT file, "FRRootFile.root", containing the jet -> e, mu and tau fake rates, which will be used for the reducible-background estimation

In this macro you have to change the input path to the data/MC PostAnalyzer files. After that, change the date directory in which the plots will be saved. Then run the macro with:

root -l -b -q StackFactory.C
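If you prefer to make those path changes from the command line instead of editing the macro by hand, something like the following works; the string being replaced is purely illustrative, check the actual assignments inside StackFactory.C first:

## illustrative only : point the macro to the merged postanalyzer files
sed -i 's|/old/postanalyzer/path|/lustre/cms/store/user/khurana/PostAnalyzer/Merged_PostAnalyzerFiles_19Sept2013ForHighMassPaper|g' StackFactory.C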

2. Use this ROOT file as input to the reducible-background estimation code. This is further divided into two steps:

a. When Z1 is fake : Code is in the directory Results/ReducibleEstimate_Reverse

b. When Z2 is fake : Code is in the directory Results/ReducibleEstimate

The shell script Results/ReducibleJobsSumbission.sh will submit jobs for the electron and muon data, for both the Z1-fake and Z2-fake cases, once you have compiled the code in both directories using:

cd ReducibleEstimate
source compile Run.C run.exe  ## this will create an executable run.exe
cd ..
cd ReducibleEstimate_Reverse/
source compile Run.C run.exe  ## this will create an executable run.exe
cd ..
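The compile script is essentially a single g++ call against the ROOT libraries; a sketch of what it is expected to do (the flags in the real script may differ):

## sketch : compile a ROOT-based macro into a standalone executable
## usage  : ./compile Run.C run.exe
g++ -o $2 $1 $(root-config --cflags --libs)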

Once you have compiled the code in both directories, set pathofInputDataFiles & prefixForOutputRootFile in the script ReducibleJobsSumbission.sh, and then run the script to submit the batch jobs for the reducible-background estimate.
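The two settings to touch in ReducibleJobsSumbission.sh look roughly like the lines below; the right-hand sides are examples only, put in your own merged-data path and output prefix:

## inside ReducibleJobsSumbission.sh (illustrative values)
pathofInputDataFiles=/lustre/cms/store/user/khurana/PostAnalyzer/Merged_PostAnalyzerFiles_19Sept2013ForHighMassPaper
prefixForOutputRootFile=19Sept2013ForHighMassPaper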

source ReducibleJobsSumbission.sh ## this will take ~1 hr to complete

These jobs take about an hour to finish, so in the meantime set up the code for the datacards and their input ROOT files, i.e. step 3. Come back and continue step 2 once the jobs are done.

Once the jobs are done you can compute the total reducible background using the script CalculateTotalReducibleBackground.sh. You have to set the "prefisstring" to the prefix you used in the job-submission script ReducibleJobsSumbission.sh.

source CalculateTotalReducibleBackground.sh
This will produce a ROOT file "ReducibleBkg.root" containing the reducible estimate in each category, with its total error. This file will be used by the datacard-making script.

3. The next step in Results is to get the signal yield and the ZZ background estimate from the MC samples we have already processed. This is done using two Python scripts: ExtractNormalisedYield.py & GetYieldForAllSamples.py.

GetYieldForAllSamples.py : lists all the input files and runs ExtractNormalisedYield.py on each sample.

ExtractNormalisedYield.py : takes a ROOT file as input and saves a 2D histogram with the yield for each final state and each mass point. It works for GGH, VBF and the ZZ background. More functionality can be added in the future.

Change all the input paths in GetYieldForAllSamples.py (use the merged PostAnalyzer path) and run the script with Python:

python GetYieldForAllSamples.py

Once you have the output of this script, i.e. Result.root, you can prepare the datacards and the ROOT files for the limit calculation. The scripts for ROOT-file preparation are:

AllHistograms_NewMC.sh & CloneHistograms.C

Run it using:

source AllHistograms_NewMC.sh

To get the datacards, use MakeDataCards.py:

python MakeDataCards.py 

These two steps can be combined using another script:

python PrepareDatacardsAndRootFiles.py