NTUPtoNTUP

Introduction

This page describes how to skim/slim NTUPs (D3PDs) in athena, with a view to running jobs in the central production system of ATLAS.

This interface is especially intended for those who want to run slim/skim jobs on the group (central) production system.

Any of your current scripts/codes can be implemented easily in this interface.

The packages used in these instructions are NTUPtoNTUPCore and NTUPtoNTUPExample.

Methods

There are two different methods of skimming/slimming:

  • NTUPtoNTUP method (not valid, yet!)
    • It uses the ROOT-reading interface in athena (AthenaRootD3pdReading).
    • Main script is: NTUPtoNTUPCore/scripts/NTUPtoNTUP_trf.py
    • Example outputs are defined in: NTUPtoNTUPCore/python/NTUPtoNTUPProdFlags.py
    • It can use many athena infrastructures for monitoring, debugging, etc.
    • Multiple jobs can be run in parallel
    • Known problems
      • CutFlowTree can't be stored
      • It can't run on recent data
  • SkimNTUP method
    • It uses a non-athena event loop.
    • Main script is: NTUPtoNTUPCore/scripts/SkimNTUP_trf.py
    • Example outputs are defined in: NTUPtoNTUPCore/python/SkimNTUPProdFlags.py
    • With this method, you can easily migrate your ROOT/PyROOT scripts or C++ codes

Currently, only the SkimNTUP method is valid.

Please refer to NTUPtoNTUPOld (see also this tutorial: SoftwareTutorialAnalyzingD3PDsInAthena) for further information on the NTUPtoNTUP method.

Athena setup

Please follow this instruction.

Use release AtlasPhysics-17.2.7.5.20 or later. Make a test area like:

mkdir -p $HOME/testarea/17.2.7.5.20
cd $HOME/testarea/17.2.7.5.20
export AtlasSetup=/afs/cern.ch/atlas/software/dist/AtlasSetup
alias asetup='source $AtlasSetup/scripts/asetup.sh'
asetup AtlasPhysics,17.2.7.5.20,here

NTUPtoNTUPCore and NTUPtoNTUPExample are installed in the release, but the current NTUPtoNTUPExample in the release has a bug, so check out the latest version:

cmt co -r NTUPtoNTUPExample-00-00-11 PhysicsAnalysis/NTUPtoNTUP/NTUPtoNTUPExample # just for references

Running example

Running example: SkimNTUP method, using job transform script

mkdir -p $TestArea/run/testSkim1
cd $TestArea/run/testSkim1
SkimNTUP_trf.py inputNTUP_SMWZFile=root://eosatlas//eos/atlas/atlaslocalgroupdisk/scratch/mkaneda/myData/mc12_8TeV/NTUP_SMWZ/f473_m1218_p1067_p1141/data12_8TeV.00209254.physics_JetTauEtmiss.merge.NTUP_SMWZ.f473_m1218_p1067_p1141_tid00960880_00/NTUP_SMWZ.00960880._000048.root.1  outputNTUP_MYSKIMNTUPFile=myTestNtup.root >log 2>&1

The script file for skimming/slimming is PhysicsAnalysis/NTUPtoNTUP/NTUPtoNTUPExample/python/skim.py, which has a PyROOT function, doSkim(), that does the following:

  • Select branches (event information, trigger, AntiKt4TopoEM, event weight (mc))
  • Copy meta-data trees (TrigConfTree, CutFlowTree)
  • Skim events by the EF_L1J350_NoAlg trigger (see the sketch below)
  • Store the cut flow as histograms (evnum and weight)
The jobOFragment file is PhysicsAnalysis/NTUPtoNTUP/NTUPtoNTUPExample/python/MySkimNTUP_prodJobOFragment.py, which calls doSkim().
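
For orientation, the core of such a trigger-based skim in PyROOT looks roughly like the following (a minimal sketch, not the actual skim.py; it assumes the trigger branch is named EF_L1J350_NoAlg and that the chain ch and the output tree ch_new are set up as in the tutorial below):

# minimal sketch of the trigger selection (assumed branch name: EF_L1J350_NoAlg)
for i in xrange(ch.GetEntries()):
    ch.GetEntry(i)
    if ch.EF_L1J350_NoAlg:   # keep only events passing the trigger
        ch_new.Fill()        # ch_new is an empty CloneTree(0) of the input chain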

To add a new type, add a new definition to PhysicsAnalysis/NTUPtoNTUP/NTUPtoNTUPCore/python/SkimNTUPProdFlags.py.

Tutorial for SkimNTUP method (using PyROOT script)

Adding new output definition

Add your output definition at the bottom of $TestArea/PhysicsAnalysis/NTUPtoNTUP/NTUPtoNTUPCore/python/SkimNTUPProdFlags.py:

class WriteMyTestSkimNTUP (JobProperty):
    """test NTUP""" 
    statusOn = True
    allowedTypes = ['bool']
    StoredValue = False
    StreamName = 'StreamNTUP_MYTESTSKIM'
    FileName = ''
    isVirtual = False
    SkimNTUPScript = "MyTestNTUP/MyTestSkimNTUP_prodJobOFragment.py"
    TreeNames = ['physics']
    SubSteps = ['n2n']
prodFlags.add_JobProperty (WriteMyTestSkimNTUP)
listAllKnownSkimNTUP.append (prodFlags.WriteMyTestSkimNTUP) 

  • MyTestNTUP/MyTestSkimNTUP_prodJobOFragment.py: the package name and jobOFragment file that will be made in the next sections
  • TreeNames: the tree name(s) of the ntuple (the output tree name will be the same as the input; TreeNames is used when the output NTUP is itself used as input)
  • StreamName: the stream name. It also defines the argument name for SkimNTUP_trf.py; the argument name is "outputNTUP_MYTESTSKIMFile" in this case.
And compile it:
cd $TestArea/PhysicsAnalysis/NTUPtoNTUP/NTUPtoNTUPCore/cmt; make

Preparing new package

Make new package to construct your new ntuple:

cd $TestArea
acmd.py cmt new-pkg PhysicsAnalysis/NTUPtoNTUP/MyTestNTUP

Prepare Job fragment file

Make the jobOFragment file: PhysicsAnalysis/NTUPtoNTUP/MyTestNTUP/share/MyTestSkimNTUP_prodJobOFragment.py:

# This jobO should not be included more than once:
include.block( "MyTestNTUP/MyTestSkimNTUP_prodJobOFragment.py" )
# Common import(s):
from AthenaCommon.JobProperties import jobproperties
prodFlags = jobproperties.SkimNTUP_ProdFlags
from PrimaryDPDMaker.PrimaryDPDHelpers import buildFileName
from AthenaCommon.AthenaCommonFlags import athenaCommonFlags # needed for FilesInput() below
from AthenaCommon.SystemOfUnits import GeV

myTestNTUP=prodFlags.WriteMyTestSkimNTUP

# Set up a logger:
from AthenaCommon.Logging import logging
MyTestSkimStream_msg = logging.getLogger( 'MyTestSkim_prodJobOFragment' )

# Check if the configuration makes sense:
if myTestNTUP.isVirtual:
    MyTestSkimStream_msg.error( "NTUP stream can't be virtual! " +
                                 "It's a configuration error!" )
    raise NameError( "NTUP set to be a virtual stream" )

## Construct the stream and file names:
streamName = myTestNTUP.StreamName
if myTestNTUP.FileName=='':
    fileName   = buildFileName( myTestNTUP )
else:
    fileName   = myTestNTUP.FileName
MyTestSkimStream_msg.info( "Configuring MyTestSkimNTUP with streamName '%s' and fileName '%s'" % \
                            ( streamName, fileName ) )

## set input tree:
from AthenaCommon.JobProperties import jobproperties
ntupFlags=jobproperties.SkimNTUP_ProdFlags
tree_name=ntupFlags.TreeName()

#do skim
from MyTestNTUP.mySkim import doMySkim
doMySkim(tree_name,fileName,athenaCommonFlags.FilesInput())

Make skimming function

Make skimming function: PhysicsAnalysis/NTUPtoNTUP/MyTestNTUP/python/mySkim.py

def doMySkim(treeName,outputFile,inputFiles):
    from ROOT import TChain
    from ROOT import TH1F
    from ROOT import TFile
    from ROOT import ROOT
    #import rootlogon
    ROOT.gROOT.SetBatch(1)

    print "inputFiles = ", inputFiles

    #get main tree
    ch = TChain(treeName)
    for file in inputFiles:
        ch.Add(file)

    nEntries = ch.GetEntries()
    #print "nEntries = ", nEntries

    #*****set branches*****

    #set branch status, at first, all off
    ch.SetBranchStatus("*", 0)

    #event information
    ch.SetBranchStatus("RunNumber",1)
    ch.SetBranchStatus("EventNumber",1)
    ch.SetBranchStatus("lbn",1)

    #jet (enable all AntiKt4TopoEM)
    ch.SetBranchStatus("jet_AntiKt4TopoEM_*",1)

    #mc (enable only weight value)
    ch.SetBranchStatus("mcevt_weight",1)

    #*****set branches end*****

    #get trigger tree
    chTri = TChain(treeName+"Meta/TrigConfTree")
    for file in inputFiles:
        chTri.Add(file)

    #get cut flow tree
    chCutFlow = TChain(treeName+"Meta/CutFlowTree")
    for file in inputFiles:
        chCutFlow.Add(file)

    # Write to new file
    outFile = outputFile
    newFile = TFile(outFile, "RECREATE")

    #new tree
    ch_new = ch.CloneTree(0)

    # event counter
    cutName=["preD3PD","postD3PD","trigCut"]
    evnum=[0,0,0]
    weight=[0,0,0]

    #event selection
    for i in xrange(nEntries):
        ch.GetEntry(i)
        evnum[1]+=1
        if hasattr(ch,"mcevt_weight") \
           and ch.mcevt_weight.size() !=0 \
           and ch.mcevt_weight[0].size() !=0:
            w=ch.mcevt_weight[0][0]
        else:
            w=1
        weight[1]+=w

        for j in xrange(ch.jet_AntiKt4TopoEM_n):
            if ch.jet_AntiKt4TopoEM_pt[j] > 500*1000.: # 500 GeV jet
                ch_new.Fill()
                evnum[2]+=1
                weight[2]+=w
                break

    newFile.cd()
    ch_new.Write()
    #nEntriesNew = ch_new.GetEntries()
    #print "nEntriesForNewFile = ", nEntriesNew

    #check cut flow at D3PD level
    if chCutFlow.GetEntries() != 0:
        for i in xrange (chCutFlow.GetEntries()):
            chCutFlow.GetEntry(i)
            cutFlowName=chCutFlow.name
            for j in xrange(cutFlowName.size()):
                if cutFlowName.at(j) == "AllExecutedEvents":
                    evnum[0]+=chCutFlow.nAcceptedEvents.at(j)
                    weight[0]+=chCutFlow.nWeightedAcceptedEvents.at(j)
    else:
        evnum[0]=evnum[1]
        weight[0]=weight[1]

    #copy trigger meta data
    newFile.cd()
    newdir = newFile.mkdir( treeName+"Meta", treeName+"Meta" )
    newdir.cd()
    chTri_new = chTri.CloneTree()
    chTri_new.Write()

    # cut flow histograms
    newFile.cd()
    h_evnum=TH1F("evnum","evnum",len(cutName),0,len(cutName))
    h_weight=TH1F("weight","weight",len(cutName),0,len(cutName))
    print ""
    print "******Cut Flow for ",outFile, "******"
    print "%10s %10s %12s" % ("CutName", "Events", "Weights")
    for i in xrange(len(cutName)):
        print "%10s %10d %12.2f" % (cutName[i], evnum[i], weight[i])
        h_evnum.GetXaxis().SetBinLabel(i+1,cutName[i])
        h_evnum.SetBinContent(i+1,evnum[i])
        h_weight.GetXaxis().SetBinLabel(i+1,cutName[i])
        h_weight.SetBinContent(i+1,weight[i])
    print "****************************"
    print ""
    h_evnum.Write()
    h_weight.Write()


In addition, put an empty file named "__init__.py" in PhysicsAnalysis/NTUPtoNTUP/MyTestNTUP/python/ so that python can import the module.

Compile and test

Compile the package:

cd ../cmt/
sed -i "s/apply_pattern component_library/#apply_pattern component_library/g" requirements # comment out library requirement for the moment
make

And test new output:

mkdir -p $TestArea/run/testSkim2
cd $TestArea/run/testSkim2
SkimNTUP_trf.py  inputNTUP_SMWZFile=root://eosatlas//eos/atlas/atlaslocalgroupdisk/scratch/mkaneda/myData/mc12_8TeV/NTUP_SMWZ/f473_m1218_p1067_p1141/data12_8TeV.00209254.physics_JetTauEtmiss.merge.NTUP_SMWZ.f473_m1218_p1067_p1141_tid00960880_00/NTUP_SMWZ.00960880._000048.root.1  outputNTUP_MYTESTSKIMFile=myTestNtup.root >log 2>&1

Check output file:

 acmd.py dump-root filtered.myTestNtup.root -t physics
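
You can also inspect the result interactively; a minimal PyROOT check (a sketch, assuming the output file is named filtered.myTestNtup.root as above) could look like:

from ROOT import TFile

f = TFile.Open("filtered.myTestNtup.root")
print "entries after skim:", f.Get("physics").GetEntries()
# the meta-data directory written by doMySkim is <treeName>Meta
print "meta objects:", [k.GetName() for k in f.Get("physicsMeta").GetListOfKeys()]
f.Get("evnum").Print("all")   # cut-flow histogram (events per cut)
f.Close()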

Tutorial for SkimNTUP method (using C++ application)

In this section, the same definition files (for MYTESTSKIM) as in the previous section will be used.

Please make another type (e.g. MYTESTSKIM2) if you want.

Prepare C++ code for skimming

Make the skimming function: PhysicsAnalysis/NTUPtoNTUP/MyTestNTUP/src/mySkim.cxx (the name must match the application line added to cmt/requirements below):

#include <iostream>
#include <stdio.h>
#include <vector>
#include <iomanip>
#include <stdlib.h>
#include <string.h> // for strtok
#include "TChain.h"
#include "TH1F.h"
#include "TFile.h"

using namespace std;

int main (int argc, char *argv[]){
  if(argc!=4){
    cerr << "usage: mySkim.exe treeName outputFile inputFiles(file1,file2,file3)" << endl;
    exit(1);
  }

  TString treeName=argv[1];
  TString outputFile=argv[2];
  TString inputFiles=argv[3];

  std::vector<TString>inputs;
  char*str;
  str=strtok (const_cast<char*>(inputFiles.Data()),",");
  while (str!=NULL){
    inputs.push_back(str);
    str=strtok(NULL,",");
  }

  //get main tree
  TChain* ch= new TChain(treeName);
  for(unsigned int i=0;i<inputs.size();i++){
    ch->Add(inputs[i]);
  }

  int nEntries = ch->GetEntries();
  //cout << "nEntries = " << nEntries << endl;

  //*****set branches*****

  //set branch status, at first, all off
  ch->SetBranchStatus("*", 0);

  //event information
  ch->SetBranchStatus("RunNumber",1);
  ch->SetBranchStatus("EventNumber",1);
  ch->SetBranchStatus("lbn",1);

  //jet (enable all AntiKt4TopoEM)
  ch->SetBranchStatus("jet_AntiKt4TopoEM_*",1);

  //mc (enable only weight value)
  ch->SetBranchStatus("mcevt_weight",1);

  //*****set branches end*****

  //get trigger tree
  TChain* chTri = new TChain(treeName+"Meta/TrigConfTree");
  for(unsigned int i=0;i<inputs.size();i++){
    chTri->Add(inputs[i]);
  }

  //get cut flow tree
  TChain* chCutFlow = new TChain(treeName+"Meta/CutFlowTree");
  for(unsigned int i=0;i<inputs.size();i++){
    chCutFlow->Add(inputs[i]);
  }

  // Write to new file
  TFile outFile(outputFile, "RECREATE");

  //new tree
  TTree* ch_new = ch->CloneTree(0);

  // event counter
  vector<TString> cutName;
  cutName.push_back("preD3PD");
  cutName.push_back("postD3PD");
  cutName.push_back("trigger");
  int evnum[]={0,0,0};
  double weight[]={0,0,0};

  // branches for event count/event selection
  vector< vector<double> > *mcevt_weight = new vector< vector<double> >;
  vector<float> *jetPt = new vector<float>;
  ch->SetBranchAddress("mcevt_weight", &mcevt_weight);
  ch->SetBranchAddress("jet_AntiKt4Truth_pt", &jetPt);

  //event selection
  for( int i=0;i< nEntries;i++){
    ch->GetEntry(i);
    evnum[1]+=1;
    double w;
    if(mcevt_weight->size() !=0 && (*mcevt_weight)[0].size() !=0){
      w=(*mcevt_weight)[0][0];
    }else{
      w=1;
    }
    weight[1]+=w;

    for(unsigned int j=0;j<jetPt->size();j++){
      if((*jetPt)[j]>500*1000.){ // 500 GeV jet
        ch_new->Fill();
        evnum[2]+=1;
        weight[2]+=w;
        break;
      }
    }
  }

  outFile.cd();
  ch_new->Write();
  //int nEntriesNew = ch_new->GetEntries()
  //cout << "nEntriesForNewFile = "<< nEntriesNew << endl;

  //check cut flow at D3PD level
  if(chCutFlow->GetEntries() != 0){
    std::vector<double> *nAcceptedEvents = new std::vector<double>;
    std::vector<double> *nWeightedAcceptedEvents = new std::vector<double>;
    std::vector<std::string> *name = new std::vector<std::string>;
    chCutFlow->SetBranchAddress("nAcceptedEvents", &nAcceptedEvents);
    chCutFlow->SetBranchAddress("nWeightedAcceptedEvents", &nWeightedAcceptedEvents);
    chCutFlow->SetBranchAddress("name", &name);
    for(int i=0;i<chCutFlow->GetEntries();i++){
      chCutFlow->GetEntry(i);
      for(unsigned int j=0;j<name->size();j++){
        if((*name)[j] == "AllExecutedEvents"){
          evnum[0]+=(*nAcceptedEvents)[j];
          weight[0]+=(*nWeightedAcceptedEvents)[j];
          break;
        }
      }
    }
    
  }else{
    evnum[0]=evnum[1];
    weight[0]=weight[1];
  }

  //copy meta data
  outFile.cd();
  TDirectory* newdir = outFile.mkdir( treeName+"Meta", treeName+"Meta" );
  newdir->cd();
  TChain* chTri_new = (TChain*)chTri->CloneTree();
  chTri_new->Write();
  if(chCutFlow->GetEntries() != 0){
    TChain* chCutFlow_new = (TChain*)chCutFlow->CloneTree();
    chCutFlow_new->Write();
  }

  // cut flow histograms
  outFile.cd();
  TH1F h_evnum("evnum","evnum",cutName.size(),0,cutName.size());
  TH1F h_weight("weight","weight",cutName.size(),0,cutName.size());
  cout << "" << endl;
  cout << "******Cut Flow for " << outputFile << "******" << endl;
  cout << setw(10) << "CutName" << setw(10) << "Events" << setw(12) << "Weights" << endl;
  for(unsigned int i=0;i<cutName.size();i++){
    cout << setw(10) << cutName[i] << setw(10) << evnum[i] << setw(12) << weight[i] << endl;
    h_evnum.GetXaxis()->SetBinLabel(i+1,cutName[i]);
    h_evnum.SetBinContent(i+1,evnum[i]);
    h_weight.GetXaxis()->SetBinLabel(i+1,cutName[i]);
    h_weight.SetBinContent(i+1,weight[i]);
  }
  cout << "*************************" << endl;
  cout << "" << endl;
  h_evnum.Write();
  h_weight.Write();

  outFile.Write();
  outFile.Close();

  return 0;
}


Compile and test

Compile the package:

cd ../cmt/
sed -i "s/#apply_pattern component_library/apply_pattern component_library/g" requirements # remove comment
sed -i "s/## put here your package dependencies.../## put here your package dependencies...\nuse AtlasROOT   AtlasROOT-*   External/g" requirements  # for root

Add the line below before "end_private" in requirements:

application mySkim ../src/mySkim.cxx

Then,

make

And test new output:

mkdir -p $TestArea/run/testSkim3
cd $TestArea/run/testSkim3
SkimNTUP_trf.py inputNTUP_SMWZFile=root://eosatlas//eos/atlas/atlaslocalgroupdisk/scratch/mkaneda/myData/mc12_8TeV/NTUP_SMWZ/f473_m1218_p1067_p1141/data12_8TeV.00209254.physics_JetTauEtmiss.merge.NTUP_SMWZ.f473_m1218_p1067_p1141_tid00960880_00/NTUP_SMWZ.00960880._000048.root.1  outputNTUP_MYTESTSKIMFile=myTestNtup.root >log 2>&1

Check output file:

 acmd.py dump-root filtered.myTestNtup.root -t physics

Test with the same conditions as the production system

In addition, please test with the same settings as the production system, like:

ALERT! Before running the command below, please clean up your environment (i.e. re-login without any athena setup).

rel=17.2.7
patch=17.2.7.5.20
reldir=/cvmfs/atlas.cern.ch/repo/sw/software/i686-slc5-gcc43-opt/$rel
source $reldir/cmtsite/asetup.sh $rel,notest --cmtconfig i686-slc5-gcc43-opt
unset CMTPATH;cd $reldir/AtlasPhysics/$patch/AtlasPhysicsRunTime/cmt;source ./setup.sh
cd -
export AtlasVersion=$patch
export AtlasPatchVersion=$patch
exe=$reldir/sw/lcg/external/Python/2.6.5/i686-slc5-gcc43-opt/bin/python
trf=$reldir/AtlasPhysics/$patch/InstallArea/share/bin/SkimNTUP_trf.py
input=root://eosatlas//eos/atlas/atlaslocalgroupdisk/scratch/mkaneda/myData/mc12_8TeV/NTUP_SMWZ/f473_m1218_p1067_p1141/data12_8TeV.00209254.physics_JetTauEtmiss.merge.NTUP_SMWZ.f473_m1218_p1067_p1141_tid00960880_00/NTUP_SMWZ.00960880._000048.root.1
output=myTestNtup.root
log=./log

$exe $trf inputNTUP_SMWZFile=$input  outputNTUP_MYTESTSKIMFile=$output >$log 2>&1

This will not set up variables such as TestArea (it is probably the same as just doing "asetup 17.2.7.5.20").

Instructions for installing your NTUP to run on the grid

ALERT! To avoid large overlaps, it is recommended to make your small NTUP less than 10% of the original NTUP size; a quick check is sketched below.
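
A quick way to check the reduction (a sketch with hypothetical local file names; os.path.getsize works only on local copies, not on xrootd paths):

import os
from ROOT import TFile

inName, outName = "input.root", "filtered.myTestNtup.root"   # hypothetical names
sizeFrac = float(os.path.getsize(outName)) / os.path.getsize(inName)
fin, fout = TFile.Open(inName), TFile.Open(outName)
evtFrac = float(fout.Get("physics").GetEntries()) / fin.Get("physics").GetEntries()
print "size fraction: %.1f%%, event fraction: %.1f%%" % (100*sizeFrac, 100*evtFrac)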

Install your package to atlasoff svn

  • Ask the conveners of your group to approve the new package and D3PDs
  • Ask the svn managers to create the new package (see here)
    • You can put new packages under /PhysicsAnalysis/NTUPtoNTUP
    • The package name can be something like EgammaN2N

Add the new NTUP definition to SkimNTUPProdFlags and the AMI database

  • Ask the managers of NTUPtoNTUPCore (Michiru) to add your D3PD definition to the NTUPtoNTUPCore package.

TIP If you plan to install several formats in your package, please make your own prodFlags file in the python directory, like EgammaN2N/python/EgammaN2NProdFlags.py, then ask the managers to include your prodFlags file in NTUPtoNTUPCore.

This allows you to add new types in your package later without further changes to NTUPtoNTUPCore.

  • Ask Karsten to add your NTUP to the AMI database.

Please refer here for details.

Create a tag of your package and test with the nightly

Create a tag of your package.

Then check with the AtlasPhysics nightly (17.2.X.Y.Z, or the current nightly if one exists).


ALERT! Please confirm that there are no compilation errors; follow this instruction.

Install it in AtlasPhysics

Then ask the managers of NTUPtoNTUP (Michiru) and the AtlasPhysics cache manager (Matthew) to put the package in the AtlasPhysics nightly.


Dear everyone,

the DPD production system has, by and large, been working well recently (so thanks to you all for your efforts), but we think that there's some room for improvement which will lead to more stable caches and efficient cache production. Hence we have a few suggestions:

- Firstly we would like groups to start collecting their own tags into TagCollector and to fill in the "Justification" box while they are doing this. Of course, if people need help to do this, we can help with the tag collection too.

- It's mandatory that the groups test their own tags, so once the new tags have gone into a validation nightly, we would also like people to check that everything is working as expected and to tick the "Passed Developer Tests" box in Tag Collector.

- If any changes are made to core packages then the q125 tests should be passed.

- The DPD team are currently working on some runtime tests that can be used once a cache has been built. At present this will just entail the q125-style tests to check for runtime errors, but on a slightly longer term Gustaaf is setting up RTT-style tests for the DPD production.

These suggestions may slightly slow down the production of each individual cache, but it will help to ensure that each cache is more stable. Hence we'll reduce the number of new caches that we have to build and the number of DPD productions that have to be launched. At times of urgency we can still be flexible though. We'll discuss these suggestions in next Monday's meeting (30/07/12), but we would like to start implementing them from the start of next week. If you have any comments, please let us know.


Cheers,

Matthew (on behalf of the DPD team)





Hello everyone,

after the long delays in making the 17.2.7.4.1 AtlasPhysics cache in the last week we have been discussing ways that we could improve the collecting of tags in future. In particular, we were worried about the effect of any changes on other developers and about communicating these changes to each other. Due to this we suggest a few changes for the Monday meeting:

- the developers should contact me by the lunchtime before the Monday meeting with any new tags that they expect to be included that week
- they should state what the changes are to the packages and whether these are expected to affect other developers
- I can then summarise this as part of the slides that I show in the Monday meeting

We can then use this extra information to help set the priorities for cache building that week and these tags will be the ones considered for inclusion in the cache that week. Any other "emergency" tags may also still be included, but again only with a full description of the effect on other developers. Hopefully this will avoid the problems that we've had in the last week, where it took a long time to settle on a stable cache.

If anyone has any questions or comments, please let me know. We can also discuss this further in the meeting on Monday .


Cheers,

Matthew (on behalf of the DPD Coordination team).


Check nightly

Normally, a new nightly is built the next morning.

Check 17.2.X.Y.Z-VAL, and once your update is included in the new nightly, try with the nightly without any modifications (no locally compiled packages).

setup:

asetup AtlasPhysics,17.2.X.Y.Z-VAL,latest_copied_release,builds

It's mandatory that the groups test their own tags, so once the new tags have gone into a validation nightly, we would also like people to check that everything is working as expected and to tick the "Passed Developer Tests" box in Tag Collector.

Make new AtlasPhysics cache

A new cache is made at short intervals.

If you want a new AtlasPhysics cache as soon as possible, you can ask the AtlasPhysics cache manager (Matthew).

Make tag

Check with DPD production team

Production request

Send requests (NTUP name (NTUP_XXX), tag (pXXX), and the full names of the datasets) to your group's Production Coordinators.

For ongoing data taking, you can ask for open-ended requests (your NTUP jobs will be submitted automatically when new data become available).

FAQ

How to distinguish data11 and data12?

One method is to use "preExec" or "preInclude" arguments.

Ref: Reco_trf

These arguments are also available for NTUPtoNTUP_trf.py and SkimNTUP_trf.py.

Example of preExec Usage

Ref: JetN2N

  • Prepare python/JetN2NFlags.py with a property class such as JetN2NYear (str) whose default is 2012 (StoredValue = '2012' or similar).
  • Add the following lines in SlimSMQCDNTUP_prodJobOFragment.py:

#do skim
from JetN2N.SlimSMQCD import doSlimSMQCD
from JetN2N.JetN2NFlags import JetN2NFlags
doSlimSMQCD(tree_name,fileName,athenaCommonFlags.FilesInput(),JetN2NFlags.JetN2NYear())

  • Update the doSlimSMQCD function to take a 4th argument 'year' and use this 'year' property inside doSlimSMQCD.
  • Such a property can be changed from the arguments to SkimNTUP_trf.py like:

   $ SkimNTUP_trf.py preExec='from JetN2N.JetN2NFlags import JetN2NFlags;JetN2NFlags.JetN2NYear.set_Value_and_Lock("2011")' inputNTUP_JETMET=...

  • This argument can also be set in the production system. You need to ask for a new p-tag different from the normal tag for 2012.

How to check if an input file is data or MC?

One example is to check whether the sample has "mc_" branches, as sketched below.

Ref: ExoticsMultiJetSkim.css
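
A minimal PyROOT sketch of such a check (assuming a chain ch set up as in mySkim.py above, and using mcevt_weight as an example of an MC-only branch):

# decide data vs MC from the presence of an MC-only branch
isMC = bool(ch.GetBranch("mcevt_weight"))   # null pointer on real data
if isMC:
    ch.SetBranchStatus("mcevt_weight", 1)   # enable it only when it exists
print "running on", "MC" if isMC else "data"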

How to use input files?

Example (python) of adding a text file (data.txt) to EgammaN2N:

  • Make a directory "data" (or as you like) in the package and put data.txt there.
  • Add the following line in cmt/requirements:

apply_pattern declare_runtime extras = "../data/*.txt"

This will make a link to data/*.txt in InstallArea/share/ (please make sure that the names are unique in athena; a prefix such as "EgammaN2N_" may be good).
  • Call these files from your scripts like:

import os
from AthenaCommon.Utils.unixtools import FindFile
dataPathList = os.environ[ 'DATAPATH' ].split(os.pathsep)
dataPathList.insert(0, os.curdir)
inputFileName = FindFile("data.txt", dataPathList, os.R_OK)
inputFile = open(inputFileName, 'r')

or in C++ code:

#include <stdlib.h>
#include <string.h>
#include <fstream>
...
  char* paths = getenv("DATAPATH");
  const char* sep = ":";
  char* path = strtok(paths,sep);
  std::string filePath;
  std::ifstream inFile;
  while (path != NULL){
    std::string filePathTmp = std::string(path)+"/data.txt";
    inFile.open(filePathTmp.c_str());
    if(! inFile.fail()){
      std::cout << "found: " << filePathTmp << std::endl;
      filePath=filePathTmp;
      inFile.close();
      break;
    }
    inFile.close();
    path = strtok(NULL,sep);
  }
  // then, use filePath
  ...

Old instruction

NTUPtoNTUPOld


Major updates:
-- MichiruKaneda - 05-Apr-2012
