IficUWAnalysis

Introduction

This page is created by IFIC and UW. It collects useful information about the group's analyses. In order to edit TWiki pages, you have to register with your AFS password.

Useful links

Trigger

Data Samples for Higgs Analysis

  • Monte Carlo:
    • SM mc12_8TeV_p1328 (skimmed):
      • COMPLETE root://valtical.cern.ch//localdisk/xrootd/users/qing/MC/mc12_p1328
        • user.qing.mc12_8TeV.107664.AlpgenJimmy_AUET2CTEQ6L1_ZmumuNp4.merge.NTUP_SMWZ.e1571_s1499_s1504_r3658_r3549_p1328_2LepSkim_v2/
        • user.qing.mc12_8TeV.107663.AlpgenJimmy_AUET2CTEQ6L1_ZmumuNp3.merge.NTUP_SMWZ.e1571_s1499_s1504_r3658_r3549_p1328_2LepSkim_v2/
        • user.qing.mc12_8TeV.107660.AlpgenJimmy_AUET2CTEQ6L1_ZmumuNp0.merge.NTUP_SMWZ.e1571_s1499_s1504_r3658_r3549_p1328_2LepSkim_v2/
        • user.qing.mc12_8TeV.107661.AlpgenJimmy_AUET2CTEQ6L1_ZmumuNp1.merge.NTUP_SMWZ.e1571_s1499_s1504_r3658_r3549_p1328_2LepSkim_v2/
        • user.qing.mc12_8TeV.107662.AlpgenJimmy_AUET2CTEQ6L1_ZmumuNp2.merge.NTUP_SMWZ.e1571_s1499_s1504_r3658_r3549_p1328_2LepSkim_v2/
        • user.qing.mc12_8TeV.107665.AlpgenJimmy_AUET2CTEQ6L1_ZmumuNp5.merge.NTUP_SMWZ.e1571_s1499_s1504_r3658_r3549_p1328_2LepSkim_v2/
        • user.qing.mc12_8TeV.107653.AlpgenJimmy_AUET2CTEQ6L1_ZeeNp3.merge.NTUP_SMWZ.e1571_s1499_s1504_r3658_r3549_p1328_2LepSkim_v2/
        • user.qing.mc12_8TeV.107650.AlpgenJimmy_AUET2CTEQ6L1_ZeeNp0.merge.NTUP_SMWZ.e1571_s1499_s1504_r3658_r3549_p1328_2LepSkim_v2/
        • user.qing.mc12_8TeV.107652.AlpgenJimmy_AUET2CTEQ6L1_ZeeNp2.merge.NTUP_SMWZ.e1571_s1499_s1504_r3658_r3549_p1328_2LepSkim_v2/
        • user.qing.mc12_8TeV.107654.AlpgenJimmy_AUET2CTEQ6L1_ZeeNp4.merge.NTUP_SMWZ.e1571_s1499_s1504_r3658_r3549_p1328_2LepSkim_v2/
        • user.qing.mc12_8TeV.107651.AlpgenJimmy_AUET2CTEQ6L1_ZeeNp1.merge.NTUP_SMWZ.e1571_s1499_s1504_r3658_r3549_p1328_2LepSkim_v2/
        • user.qing.mc12_8TeV.107655.AlpgenJimmy_AUET2CTEQ6L1_ZeeNp5.merge.NTUP_SMWZ.e1571_s1499_s1504_r3658_r3549_p1328_2LepSkim_v2/
        • user.qing.mc12_8TeV.161595.PowHegPythia8_AU2CT10_VBFH125_tautaull.merge.NTUP_SMWZ.e1222_s1469_s1470_r3542_r3549_p1328_2LepSkim_v2/
        • user.qing.mc12_8TeV.161555.PowHegPythia8_AU2CT10_ggH125_tautaull.merge.NTUP_SMWZ.e1222_s1469_s1470_r3542_r3549_p1328_2LepSkim_v2/

    • SM mc12_8TeV_p1067 (skimmed):
      • COMPLETE root://valtical.cern.ch//localdisk/xrootd/users/qing/mc12_p1067/
        • mc12_8TeV*10766*Zmumu*Np[0-5]NTUP_SMWZ*p1067/: fully uploaded to CERN cluster
        • mc12_8TeV*10765*Zee*Np[0-5]NTUP_SMWZ*p1067/: fully uploaded to CERN cluster
        • mc12_8TeV*VBFH[100-150 at step 5]*tautaull*NTUP_SMWZ*p1067/: fully uploaded to CERN cluster
        • mc12_8TeV*ggH[100-150 at step 5]*tautaull*NTUP_SMWZ*p1067/: fully uploaded to CERN cluster
      • COMPLETE /lustre/ific.uv.es/grid/atlas/atlaslocalgroupdisk/MC/mc12_p1067/
        • mc12_8TeV*10766*Zmumu*Np[0-5]NTUP_SMWZ*p1067/: fully uploaded to IFIC lustre
        • mc12_8TeV*10765*Zee*Np[0-5]NTUP_SMWZ*p1067/: fully uploaded to IFIC lustre
        • mc12_8TeV*VBFH[100-150 at step 5 but missing 140]*tautaull*NTUP_SMWZ*p1067/: fully uploaded to IFIC lustre
        • mc12_8TeV*ggH[100-150 at step 5]*tautaull*NTUP_SMWZ*p1067/: fully uploaded to IFIC lustre
    • SM mc11c_7TeV p833 (skimmed Zee+Zmm):
      • DELETED root://valtical.cern.ch//localdisk/xrootd/users/qing/mc11c_p833/
      • DELETED /lustre/ific.uv.es/grid/atlas/atlaslocalgroupdisk/MC/mc11c_p833/
    • SM mc11b_7TeV p801 (skimmed Z):
      • DELETED root://valtical.cern.ch//localdisk/xrootd/users/yesenia/mc11b_SKIMM_p801/
      • COMPLETE /lustre/ific.uv.es/grid/atlas/atlaslocalgroupdisk/MC/mc11b_SKIMM_p801/
    • SM mc11b_7TeV p833 (skimmed Zee + Zmm):
      • DELETED root://valtical.cern.ch//localdisk/xrootd/users/yesenia/mc11b_SKIMM_p833/
      • COMPLETE /lustre/ific.uv.es/grid/atlas/atlaslocalgroupdisk/MC/mc11b_SKIMM_p833/

  • Data:
    • data12_8TeV (2012 skimmed two lepton D3PDs rel 17.2.7.4 p1328_p1329):
      • COMPLETE root://valtical.cern.ch//localdisk/xrootd/users/qing/data12_8TeV/SMDILEP_p1328_p1329/
        • data12_8TeV*period*Egamma*NTUP_SMWZ*p1328_p1329/:
          • periods A,B,C,D,E,G,H,I,J,L 100%, transferred to CERN
        • data12_8TeV*period*Muon*NTUP_SMWZ*p1328_p1329/:
          • periods A,B,C,D,E,G,H,I,J,L 100%, transferred to CERN
      • DELETED /lustre/ific.uv.es/grid/atlas/atlaslocalgroupdisk/DATA/data12_8TeV/SMDILEP_p1328_p1329/
        • data12_8TeV*period*Egamma*NTUP_SMWZ*p1328_p1329/: to be transferred
        • data12_8TeV*period*Muon*NTUP_SMWZ*p1328_p1329/: to be transferred
    • data12_8TeV (2012 skimmed two lepton D3PDs rel 17.2.2.1 p1067):
      • root://valtical.cern.ch//localdisk/xrootd/users/qing/data12_8TeV/SMDILEP_p1067/
        • DELETED data12_8TeV*period*Egamma*NTUP_SMWZ*p1067/: periods A,B,C,D,E fully uploaded to CERN cluster
        • DELETED data12_8TeV*period*Muon*NTUP_SMWZ*p1067/: periods A,B,C,D,E fully uploaded to CERN cluster
      • /lustre/ific.uv.es/grid/atlas/atlaslocalgroupdisk/DATA/data12_8TeV/SMDILEP_p1067/
        • DELETED data12_8TeV*period*Egamma*NTUP_SMWZ*p1067/: periods A,B,C,D,E fully uploaded to IFIC lustre
        • DELETED data12_8TeV*period*Muon*NTUP_SMWZ*p1067/: periods A,B,C,D,E fully uploaded to IFIC lustre
    • data12_8TeV (Non-official data 2012 two lepton D3PDs rel 17.2.1.3.1 p984):
      • DELETED root://valtical.cern.ch//localdisk/xrootd/users/qing/data12_8TeV/SMDILEP_p984/
      • DELETED /lustre/ific.uv.es/grid/atlas/atlaslocalgroupdisk/DATA/data12_8TeV/SMDILEP_p984/
    • data11_7TeV (data 2011 two lepton skimmed D3PDs from SM WW group rel 17: p833). Status 28/01/2012: all data imported into the xrootd cluster.
      • DELETED root://valtical.cern.ch//localdisk/xrootd/users/qing/data11_7TeV/SMDILEP_p833/
      • DELETED /lustre/ific.uv.es/grid/atlas/atlaslocalgroupdisk/DATA/data11_7TeV/SMDILEP_p833/
    • data11_7TeV (data 2011 lepton-hadron skimmed D3PDs from TauWG release 17.0.3.6.1: p741). Status 02/11/2011:
      • DELETED root://valtical.cern.ch//localdisk/xrootd/users/qing/data11_7TeV/NTUP_TAUMEDIUM_p716/
      • DELETED /lustre/ific.uv.es/grid/atlas/atlaslocalgroupdisk/DATA/data11_7TeV/NTUP_TAUMEDIUM_p716/
    • data11_7TeV (data 2011 two lepton skimmed D3PDs from SM WW group p503). Status 07/08/2011: all data up to, and including, period H4:
      • DELETED root://valtical.cern.ch//localdisk/xrootd/users/yzhu/datasets/data11_7TeV/SMDILEP/
      • DELETED /lustre/ific.uv.es/grid/atlas/atlaslocalgroupdisk/DATA/datasets/data11_7TeV/SMDILEP/

  • INFO release 17 (p716) data11
    • Copied files: /afs/cern.ch/user/y/yesenia/public/MissingData/data_rel17_copied
    • Missing files: /afs/cern.ch/user/y/yesenia/public/MissingData/data_rel17_missing
    • Number of files per dataset: /afs/cern.ch/user/y/yesenia/public/MissingData/num_files

  • INFO lepton-hadron (p741) data11
    • Copied files: /afs/cern.ch/user/j/jvalero/public/SkimSlimlh/DatasetsInCluster.txt
    • Missing files: /afs/cern.ch/user/j/jvalero/public/SkimSlimlh/DatasetsInCluster.txt

Public Datasets

| *Tag* | *Skimmed?* | *Description* | *Path* | *Size* | *Owners* | *Estimated deletion date* |
| mc12_8TeV | Y | Zmumu, Zee, VBFH, ggH | root://valtical.cern.ch//localdisk/xrootd/users/qing/MC/mc12_p1328/ | 0.27 TB | Juan | -- |
| data12_8TeV | Y | Egamma & Muons, periods H,I,J,L | root://valtical.cern.ch//localdisk/xrootd/users/qing/data12_8TeV/SMDILEP_p1328_p1329/ | 21.92 TB | Juan | -- |
| mc12_8TeV | Y | Zmumu, Zee, VBFH, ggH | root://valtical.cern.ch//localdisk/xrootd/users/qing/mc12_p1067/ | 1.45 TB | Juan | -- |
| mc12_8TeV | Y | Zmumu, Zee | root://valtical.cern.ch//localdisk/xrootd/users/qing/mc12_p1344/ | 2.4 TB | Juan | -- |
| Total | -- | -- | -- | 26.02 TB | ific-uw | -- |

Deleted Datasets

| *Tag* | *Skimmed?* | *Description* | *Path* | *Size* | *Owners* | *Deletion date* |
| data12_8TeV | Y | Egamma & Muons, periods A-E | root://valtical.cern.ch//localdisk/xrootd/users/qing/data12_8TeV/SMDILEP_p1067/ | 11.19 TB | Juan | Feb 11, 2013 |
| data11_7TeV | Y | Egamma, Muons | root://valtical.cern.ch//localdisk/xrootd/users/qing/data11_7TeV/SMDILEP_p833/ | 1.82 TB | Juan | Feb 5, 2013 |
| mc11c_7TeV | Y | Zmumu, Zee | root://valtical.cern.ch//localdisk/xrootd/users/qing/mc11c_p833/ | 0.84 TB | Juan | Feb 5, 2013 |
| data11_7TeV | Y | Egamma, Muons | root://valtical.cern.ch//localdisk/xrootd/users/yzhu/datasets/data11_7TeV/SMDILEP | 3.4 GB | Ying Chun Zhu | Feb 1, 2013 |
| data12_8TeV | Y | Muons | root://valtical.cern.ch//localdisk/xrootd/users/qing/data12_8TeV/SMDILEP_p984/ | 7.2 GB | Juan | Nov 26, 2012 |
| mc11b_7TeV | Y | Z | root://valtical.cern.ch//localdisk/xrootd/users/yesenia/mc11b_SKIMM_p801/ | 0.86 TB | Juan | Jan 25, 2013 |
| mc11b_7TeV | Y | Zee, Zmumu | root://valtical.cern.ch//localdisk/xrootd/users/yesenia/mc11b_SKIMM_p833/ | 0.63 TB | Yesenia | Nov 26, 2012 |

Private Datasets

| *User* | *Path* | *Size* |
| Amanda Kruse | root://valtical.cern.ch//localdisk/xrootd/users/akkruse | 1.32 TB |
| German | root://valtical.cern.ch//localdisk/xrootd/users/montoya | 1.58 TB |
| Gang Qin | root://valtical.cern.ch//localdisk/xrootd/users/qing | 0 TB |
| Xifeng Ruan | root://valtical.cern.ch//localdisk/xrootd/users/ruanxf | 567.91 GB |
| Xin Chen | root://valtical.cern.ch//localdisk/xrootd/users/xchen | 5.47 TB |
| Yesenia | root://valtical.cern.ch//localdisk/xrootd/users/yesenia | 0 GB |
| Luca Fiorini | root://valtical.cern.ch//localdisk/xrootd/users/lfiorini | 93.57 GB |
| Bruce | root://valtical.cern.ch//localdisk/xrootd/users/bmellado | 37.13 KB |
| Damian | root://valtical.cern.ch//localdisk/xrootd/users/daalvare | 489.0 GB |
| All | root://valtical.cern.ch//localdisk/xrootd/users/ | 9.5 TB |

Analysis Framework

Software structure

The software on the cluster is installed in /work/offline, which is NFS-mounted. The software is structured in CMT projects. A project folder contains a cmt/project.cmt file that describes the project.

Common project

The common software is installed in /work/offline/common. It defines the CMT release and the ROOT and Python releases. Other common tools are available there as well, such as a stand-alone fitter and a command-line argument parser library. The setup script of the project is /work/offline/common/setup.sh
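To pick up the common environment, source the setup script and check that the expected tools are available (a minimal sketch; the exact releases depend on the current installation):

source /work/offline/common/setup.sh
which root             # should resolve to the ROOT installation under /work/offline/common
root-config --version  # print the ROOT release defined by the project
python --version       # print the Python release defined by the project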

Create your own project

You can use the common project as the base for your own code. Create your project in /work/offline, for example /work/offline/myproject. Inside it, you must create a project file cmt/project.cmt with the following:
project <user>
use common

Create a setup script like the following:

#!/bin/bash
source /work/offline/common/setup.sh
export USER_PATH=/work/offline/myproject
export PATH=${USER_PATH}/installed/${CMTCONFIG}/bin:${USER_PATH}/installed/share/bin:${PATH}
export LD_LIBRARY_PATH=${USER_PATH}/installed/${CMTCONFIG}/lib:${LD_LIBRARY_PATH}
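
Putting the pieces together, a new project can be set up as follows (a sketch; "myproject" is a placeholder for your own project name):

mkdir -p /work/offline/myproject/cmt
cat > /work/offline/myproject/cmt/project.cmt <<'EOF'
project myproject
use common
EOF
# save the setup script above as /work/offline/myproject/setup.sh, then:
source /work/offline/myproject/setup.sh
echo ${USER_PATH}   # should print /work/offline/myproject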

Access SVN

We have an SVN repository here: https://svnweb.cern.ch/cern/wsvn/ificuwrepo/. The repository is accessible only to IFIC and UW members. If you cannot access the repository, send a mail to Luca.Fiorini@cern.ch. To check out a package from SVN, do:
  • export SVNROOT=svn+ssh://YOURCERNUSERNAME@svn.cern.ch/reps/ificuwrepo
  • svn co $SVNROOT/Path_To/The_Package

Practical commands:

  • export SVNROOT=svn+ssh://YOURCERNUSERNAME@svn.cern.ch/reps/ificuwrepo
  • svn co $SVNROOT/PhysTools/httll/plots

  • svn update: bring your working copy up to date with the repository
  • svn status: list your local modifications
  • svn add *: schedule new files for addition
  • svn ci -m "blah blah": commit your changes with a log message
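
A typical editing session then looks like this (a sketch; the file name and log message are hypothetical):

cd plots
svn update                       # sync with the repository first
# ... edit or create files ...
svn status                       # review what changed locally
svn add newPlotMacro.C           # schedule a new file for addition
svn ci -m "add new plot macro"   # commit with a log message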

For PROOF beginners:

For Condor beginners:

  • At the CERN valtical cluster, users should submit Condor jobs only from valtical00.
    • When you log in to valtical00 with your CERN AFS account, the Condor environment should already be set up. You can verify this by running 'condor_status'; if it does not work, send a mail to gang.qin@cern.ch.
  • Popular Condor commands:
    • condor_status: list the status of all worker nodes (WNs) in the Condor cluster.
    • condor_q: list the current jobs in the Condor queue.
    • condor_submit id.job: submit id.job to the Condor queue.
    • condor_rm JOBID: remove the Condor job with the given ID.
    • condor_rm -forcex JOBID: forcibly remove the Condor job with the given ID.
  • Some simple examples:
  • SAM_condor test job example:
    • cd /work/users/qing/data5/qing/condor_test; condor_submit valtical09.job. This submits a Condor job to the machine valtical09.

cat valtical09.job:

Universe        = vanilla
Notification    = Error
Executable      = script.bash
Arguments       = HWWrel16 valtical09.txt valtical09 valtical09 0 1 1 1
GetEnv          = True
Initialdir      = /work/users/qing/data5/qing/condor_test/Results/data11_177986_Egamma
Output          = logs/valtical09.out
Error           = logs/valtical09.err
Log             = logs/valtical09.log
# If you want to submit jobs to the whole cluster, remove '&& ((machine == "valtical09.cern.ch"))' from the requirements line below.
requirements    = ((Arch == "INTEL" || Arch == "X86_64") && regexp("valtical00",Machine)!=TRUE)  && ((machine == "valtical09.cern.ch"))
+IsFastJob      = True
+IsAnaJob       = TRUE
stream_output   = False
stream_error    = False
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT_OR_EVICT
transfer_input_files    = HWWrel16, list/valtical09.txt, data11_7TeV.periodAllYear_DetStatus-v28-pro08-07_CoolRunQuery-00-04-00_WZjets_allchannels.xml, mu_mc10b.root,
Queue

The job will be executed in a temporary directory on valtical09, such as /opt/condor-7.5.0/local.valtical09/execute/dir_8606, and it runs exactly as "script.bash HWWrel16 valtical09.txt valtical09 valtical09 0 1 1 1", reading in the input files "HWWrel16, list/valtical09.txt, data11_7TeV.periodAllYear_DetStatus-v28-pro08-07_CoolRunQuery-00-04-00_WZjets_allchannels.xml, mu_mc10b.root", just as you would expect it to run locally.

  • If the input files are small, you can put them on the local disk and Condor will transfer them to the machine that processes the job. But if they are large, save them to xrootd first and read them through xrootd during job execution. Do NOT read from or write to NFS (e.g., /data5) during Condor job execution: if you do and you have 30 jobs, it is as if 30 people were reading and writing to NFS simultaneously, which interferes with the NFS access of other users and is NOT allowed! For a similar reason, to relieve the I/O load on xrootd, include one or just a few xrootd files (depending on the file size) per Condor job; chaining many xrootd files together in one job slows down the access. Also, activate and read only the tree branches that you need in your analysis, not all of them. You can merge the smaller output ROOT files locally using "hadd" from ROOT (see the sketch below).
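
For example (a sketch; the macro and file names are hypothetical, and reading xrootd URLs requires a ROOT build with xrootd support):

# inside the job: read the input directly from xrootd
root -l -b -q 'analysis.C("root://valtical.cern.ch//localdisk/xrootd/users/qing/MC/mc12_p1328/sample.root")'
# after all jobs have finished: merge the small outputs locally
hadd -f merged.root Results/output_*.root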

  • At the end of the job, you can choose to save your output files to xrootd (see the example below); by default, Condor copies all newly created files (compared to before job execution) back to the location from which you submitted your job (Initialdir). If there is any problem during job processing, check the outputs (jobXXX.err, jobXXX.log, jobXXX.out) for more detailed error information. Read the Condor manual to choose a different directory for your output files. Condor deletes all files in the temporary running directory when the job exits.
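
To save an output file to xrootd at the end of the job, you can use xrdcp (a sketch; the destination path is hypothetical):

xrdcp -f output.root root://valtical.cern.ch//localdisk/xrootd/users/YOURUSERNAME/output.root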

Skimmer

The code is available here: https://svnweb.cern.ch/cern/wsvn/ificuwrepo/PhysTools/Skimmer

Example of running:

prun --bexec "make clean; make main" --exec "./main %IN" --inDS group10.phys-sm.data10_7TeV.00152166.physics_L1Calo.merge.ESD.r1647_p306.WZphys.101222.01.101223163805_D3PD --outDS user.lfiorini.testskim --outputs skimmed.root --athenaTag 16.0.3 --nFilesPerJob 1 --mergeOutput
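
Before submitting with prun, you can test the build and run steps locally on a single input file (a sketch; the input file name is hypothetical and stands in for what prun substitutes as %IN):

make clean; make main
./main root://valtical.cern.ch//localdisk/xrootd/users/qing/MC/mc12_p1328/sample.root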

-- LucaFiorini - 15-Mar-2011
