ggF and VBF: well advanced within the SM HTT analysis. Next step: study VH production in the tautau decay channel.
- tau decays in JHUGen samples (presumably done by Tauola, i.e. LHE -> Pythia & Tauola) do not agree with those expected from POWHEG samples (which also rely on Tauola for decaying taus). A possible reason, following Meng's explanation, is that mdtau is set to 240 in the JHUGen samples. It needs to be verified that after removing this setting from the JHUGen samples, the tau decays agree with those from POWHEG.
- in Tauola, mdtau is the variable that selects the tau decay mode; mdtau = 0 allows all decay modes. Remove the following line from the ExternalDecays module:
mdtau = cms.int32(240)
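For reference, this is roughly where the line lives in a CMSSW generator fragment. A sketch only, assuming the standard Tauola InputCards layout (the exact PSet structure may differ in your fragment); with the mdtau line removed, Tauola falls back to mdtau = 0, i.e. all tau decay modes.

```python
# Sketch of the relevant generator-fragment piece (layout assumed, not
# copied from the actual JHUGen fragment). Removing the mdtau line lets
# Tauola default to mdtau = 0, i.e. all tau decay modes are allowed.
import FWCore.ParameterSet.Config as cms

ExternalDecays = cms.PSet(
    Tauola = cms.untracked.PSet(
        UseTauolaPolarization = cms.bool(True),
        InputCards = cms.PSet(
            pjak1 = cms.int32(0),
            pjak2 = cms.int32(0)
            # mdtau = cms.int32(240)  <- remove this line
        )
    ),
    parameterSets = cms.vstring('Tauola')
)
```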
HTT-analysis
FSA ntuplizing
- then there are three steps: skimming, svfit and analysis
- FSA skimming for each final state
- go to
/afs/hep.wisc.edu/home/kaur/CMSSW_8_0_26_patch1/src/SMHTT2016/mt/Skim/Mar7
- do
./Make.sh skim_mt.cc
- then edit and run
python prepare_run2.py
- then
sh do_submit.sh
- SVfit mass calculation:
- checkout: https://github.com/truggles/SubmitSVFit
- MY working area : /afs/hep.wisc.edu/home/kaur/CMSSW_8_0_25/src/SubmitSVFit/test
- run
SVFitStandAloneFSATauDM inputFile=/nfs_scratch/kaur/mutau_sv/Out_VBF0PM/VBF0PM9.root newOutputFile=1 newFile=tmpOut2.root recoilType=0 doES=1 isWJets=0 metType=-1
- or, for condor, do
gsido mkdir /hdfs/store/user/kaur/SKMD_Apr18/VBF0PM
gsido rsync -ahP /nfs_scratch/kaur/mutau_sv/Out_VBF0PM/ /hdfs/store/user/kaur/SKMD_Apr18/VBF0PM/
python svFitSubmitter.py -sd /hdfs/store/user/kaur/SKMD_Apr18/VBF0PM -es=0 -r=0 -iswj=0 -mt=-1 --jobName svFit_Apr18
- then the analysis
- MY area:
= CMSSW_8_0_26_patch1/src/SMHTT2016/mt/Analyze/=
- do
./Make.sh FinalSelection2D_relaxed.cc
- ./FinalSelection2D_relaxed.exe TauTau_13_Out_ggH125-ggH1250.root files_nominal/_apr18VBF125.root VBF125 qqH125 1
- python Unroll_2Drelaxed.py
UW skimmed and svFit samples:
- Tyler: VBF samples I have been using are here: /hdfs/store/user/ndev/LFV_feb18_mc/VBFHToTauTau_M125_13TeV_powheg_pythia8_v6-v1/
- You will still need to skim those ntuples and svFit them. If you want already-skimmed tautau channel ntuples, see: /hdfs/store/user/truggles/svFitMar20_SM-HTT/Recoil2_TES1_WJ0/TauTau_13_Recoil2_TES1_WJ0-VBFHtoTauTau125_*_tt.root
- Data files here: /hdfs/store/user/truggles/svFitFeb17_SM-HTT/Recoil0_TES0_WJ0/TauTau_13_Recoil0_TES0_WJ0-dataTT-*
- You can find all of my background samples under the same general folder structure: /hdfs/store/user/truggles/svFitMar20_SM-HTT. The directory name "Recoil0_TES0_WJ0" encodes which recoil correction was applied, whether the tau energy scale was applied, and whether the sample is W+jets.
- Cecile:
- mutau ntuples here: /hdfs/store/user/caillol/smhmt_svfitted_20march/
- but two branches are missing (eta and phi of the H) that are needed to reconstruct the Higgs:
higgs.SetPtEtaPhiM(pt_sv, eta_sv, phi_sv, m_sv)
- eta and phi of the Higgs are available in these files: /hdfs/store/user/caillol/smhmt_svfitted_DESYlike_Lauralike/
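The SetPtEtaPhiM call above is why the eta_sv and phi_sv branches are needed. A minimal plain-Python sketch of what ROOT's TLorentzVector::SetPtEtaPhiM does (the example kinematics are hypothetical):

```python
import math

def p4_from_ptetaphim(pt, eta, phi, m):
    """Convert (pt, eta, phi, m) to Cartesian (px, py, pz, E),
    as ROOT's TLorentzVector::SetPtEtaPhiM does."""
    px = pt * math.cos(phi)
    py = pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    e = math.sqrt(px * px + py * py + pz * pz + m * m)
    return px, py, pz, e

def invariant_mass(p4):
    """Recover the invariant mass from a Cartesian four-vector."""
    px, py, pz, e = p4
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

# hypothetical SVfit Higgs candidate: pt_sv=100, eta_sv=1.2, phi_sv=0.5, m_sv=125
higgs = p4_from_ptetaphim(100.0, 1.2, 0.5, 125.0)
```

The round trip recovers m_sv, which is a quick sanity check when filling these branches by hand.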
Datacards and root files from HTT
to develop MELA discriminator for limit setting
NTuples with all the FSA variables plus the MELA discriminants.
- variables of interest: mjj (dijet mass), m_sv (Higgs mass from SVfit), njets (number of jets), KD_bsm_mlt (MELA D_0-), KD_int (D_CP)
= cd /afs/hep.wisc.edu/home/kaur/CMSSW_8_0_26_patch1/src/ZZMatrixElement/MELA/test/= (working area)
- VBF-0PM: treeME_mrgd_svFit_June8_VBF0PM.root
- VBF-0M : treeME_mrgd_svFit_June8_VBF0M.root
- VBF-0Mf05ph0: treeME_mrgd_svFit_June8_VBF0Mf05ph0.root
- VBF-0PH: treeME_mrgd_svFit_June8_VBF0PH.root
- VBF-0PHf05ph0: treeME_mrgd_svFit_June8_VBF0PHf05ph0.root
- VBF-0L1f05ph0: treeME_mrgd_svFit_June8_VBF0L1f05ph0.root
- treeME_mrgd_svFit_June8_HJJMIX.root
- treeME_mrgd_svFit_June8_HJJPS.root
- treeME_mrgd_svFit_June8_HJJSM.root
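The two MELA observables listed above can be written out explicitly. A minimal sketch under the commonly used definitions D_0- = P_SM / (P_SM + P_0-) and D_CP = P_int / (2·sqrt(P_SM·P_0-)); the function names and example probabilities are hypothetical, and any relative normalization constant between the two matrix-element probabilities is omitted:

```python
import math

def d_bsm(p_sm, p_bsm):
    """MELA D_0--style discriminant between the SM (0+) and a BSM (e.g. 0-)
    hypothesis; bounded in [0, 1]. A relative normalization constant between
    p_sm and p_bsm is often applied in practice and is omitted here."""
    return p_sm / (p_sm + p_bsm)

def d_int(p_int, p_sm, p_bsm):
    """Interference (D_CP-like) discriminant, assumed normalized so that
    |d_int| <= 1 when p_int is the pure interference contribution."""
    return p_int / (2.0 * math.sqrt(p_sm * p_bsm))

# hypothetical per-event matrix-element probabilities
kd_bsm = d_bsm(0.8, 0.2)
kd_int = d_int(0.2, 0.25, 0.04)
```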
Fill tree variables into 3d histos
- get my code: https://github.com/lovedeepkaursaini/aHVV_HTT/blob/master/Fill3D.C and https://github.com/lovedeepkaursaini/aHVV_HTT/blob/master/Fill3D.h
- run root and do:
.L Fill3D.C+
Fill3D l
l.Loop()
Unroll 3d to 1d
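The unrolling step maps each (i, j, k) bin of the 3D histogram onto a single 1D bin index. A minimal sketch (names and input format hypothetical; with a ROOT TH3 you would loop over GetBinContent(i, j, k) in the same way):

```python
def unroll_3d(h3, nx, ny, nz):
    """Unroll a 3D histogram, given here as a dict {(i, j, k): content}
    with 0-based bin indices, into a flat list of length nx*ny*nz.
    Bin (i, j, k) maps to 1D bin i + nx*(j + ny*k), i.e. x varies fastest."""
    flat = [0.0] * (nx * ny * nz)
    for (i, j, k), content in h3.items():
        flat[i + nx * (j + ny * k)] = content
    return flat
```

The inverse map is i = b % nx, j = (b // nx) % ny, k = b // (nx * ny), which is useful when checking the unrolled templates bin by bin.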
Smoothing of templates
- to learn how to produce templates for a given coupling strength and smooth them to reduce the effect of the statistical uncertainties of the samples.
- get https://github.com/jbsauvan/TemplateBuilder for building the templates
- Ulascan's example: /afs/cern.ch/work/u/usarica/public/forHTT/fL1Example, json files for input/output are located in run/fL13D/builder_7(8)TeV/*.json
Datacards and workspaces (H->ZZ->4l)
- Instructions from Ulascan:
- You need to first create templates. For initial studies, you might not need systematics,
or only add those you already know are not related to fa3 limits (e.g. bkg systematics, production cross section uncertainties).
While I might not know much about the technical details of your setup, I could suggest that at least a comparison of the fa3=0, fa3=1
and interference templates with different signal jet systematics is instructive to say which ones can be kept as shape+normalization
and which ones do not need to vary shape. For making templates, you can use the TemplateBuilder I pointed to you earlier.
I also include Heshy in this thread, so that he can also help with more instructions on how to create templates.
Pardon my ignorance, but which processes do you consider in your analysis?
- The combine setup you need is
https://github.com/usarica/HiggsAnalysis-CombinedLimit/tree/FTpl
There is already a PR to the main repository from the branch, which is pending the completion of the 4l analysis – but it is pretty much usable.
There will be changes to this branch, but the changes I will be making mainly concern standardizing physics model names,
adding offshell model etc that may or may not affect you. In case they do, I will let you know immediately, as I let Heshy and others know.
The physics model is MultiSignalSpinZeroHiggs:
https://github.com/usarica/HiggsAnalysis-CombinedLimit/blob/FTpl/python/SpinZeroStructure.py#L252
The parameters of interest are called CMS_zz4l_fai1/2 (though I am planning to change them over the weekend to zz2e2mu_fai1/2
in case there is ever a combination with ATLAS). Other than that, there is not much name change anticipated in this model.
The datacard makers we have run so far are pretty specific to 4l, but I am coming up with a more generic one here:
https://github.com/HZZ4l/CreateWidthDatacards/tree/WidthJCP_13TeV_2016
This datacard maker is anticipated to work for both onshell Higgs and offshell treatments,
where interference with background become sizable and therefore relevant. In the next few days, I will also commit an example input format and templates so that the datacard maker is usable by you and others.
Datacards and workspaces (H->bb)
- Instructions from Ben to make the ZH lines in Figure 4 of the paper HIG-14-035:
setenv SCRAM_ARCH slc5_amd64_gcc472
cmsrel CMSSW_6_1_1
cd CMSSW_6_1_1/src
cmsenv
git clone https://github.com/benjaminkreis/HiggsAnalysis-CombinedLimit HiggsAnalysis/CombinedLimit
scramv1 b clean; scramv1 b
rehash
MY NOTE: use a newer version of combine in SL6.
Insert this physics model: https://github.com/benjaminkreis/HiggsAnalysis-CombinedLimit/blob/master/python/HiggsJPC_combo.py
some datacards and root files: cp /uscms_data/d1/jstupak/fa3Combo/CMSSW_6_1_1/src/HiggsAnalysis/CombinedLimit/VHbb/cards/zhDR_p0plusp1_050915_correlatedBkg_sigThrsld2.5_deCorrdMedHighNuis_correctVV .
create the workspace from these datacards and root files. For ZH,
text2workspace.py -m 125.6 datacard_Zh.txt -P HiggsAnalysis.CombinedLimit.HiggsJPC_combo:twoHypothesisHiggs --PO=muFloating -o myworkspace.text2workspace.root -v 7
expected fa3^ZH scan:
combine -M MultiDimFit myworkspace.text2workspace.root --setPhysicsModelParameters cww_zz=0.5,r_ww=1,r_box=1,r=1,r_qq=1 --freezeNuisances r_ww,cww_zz,r_box,r_qq --setPhysicsModelParameterRanges CMS_zz4l_fg4=0,1 --algo=grid --points 50 -m 125.6 -n 1D_exp -P CMS_zz4l_fg4 -t -1 --expectSignal=1
observed fa3^ZH scan:
combine -M MultiDimFit myworkspace.text2workspace.root --setPhysicsModelParameters cww_zz=0.5,r_ww=1,r_box=1,r=1,r_qq=1,wh_medBoost_trend=.3,wh_highBoost_trend=-.8,zh_medBoost_trend=.18,zh_highBoost_trend=-0.23 --freezeNuisances r_ww,cww_zz,r_box,r_qq --setPhysicsModelParameterRanges CMS_zz4l_fg4=0,1 --algo=grid --points 50 -m 125.6 -n 1D_obs -P CMS_zz4l_fg4
draw them by opening the resulting root file and doing
limit->Draw("2*deltaNLL:CMS_zz4l_fg4")
Get nevents
less myfile.lhe | grep "<event>" | wc -l
less VBFHiggs0PHf05ph0_M-125_13TeV-JHUGenV6_0.lhe.xz | grep "<event>" | wc -l
50000
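The grep one-liner above can also be done in Python, which handles .xz files without a separate decompression step. A sketch (function name hypothetical):

```python
import lzma

def count_lhe_events(path):
    """Count <event> blocks in an LHE file, opening .xz files transparently."""
    opener = lzma.open if path.endswith(".xz") else open
    with opener(path, "rt") as f:
        return sum(1 for line in f if line.lstrip().startswith("<event>"))
```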
merge lhe's
merge them with the tool described at the end of this little section:
https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideSubgroupMC#1_2_Using_pLHE_campaigns
Tool to compute couplings
theory:
- HIG-14-018 H->WW and H->ZZ legacy:
- Studies initiated by Snowmass 2013 showing power of VH and VBF channels:
- MC generator we used for VH:
-- LovedeepKaurSaini - 2017-02-09