TMD Recipes
Ntuplizing multiple datasets at FNAL (OpenHlt)
- create a working area in a CMSSW release (CMSSW_3_5_7 works)
- check out the scripts and templates from CVS:
cvs co UserCode/TMD/scripts
- edit crab.cfg.template and modify storage_path to point to your dCache area (resilient/yourusername/yoursubdirectory)
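For orientation, the edited part of crab.cfg.template might look like the fragment below. Only storage_path comes from this recipe; the other [USER] keys are assumptions about a typical CRAB configuration of that era, so keep whatever the template already has:

```ini
[USER]
# copy the ntuples to your FNAL dCache area
# (copy_data and storage_element are assumptions, not from this recipe)
copy_data = 1
storage_element = cmssrm.fnal.gov
storage_path = /resilient/yourusername/yoursubdirectory
```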
- pick a run you want to ntuplize (run list: CMS WBM run summary, https://cmswbm.web.cern.ch/ )
- choose which datasets you want to ntuplize. getDS.sh contains a dbssql command that queries for ALL the datasets available; you can also run the script on its own:
./getDS.sh ${RUNNUMBER}
This produces two outputs: dbsql_r${RUNNUMBER}, with the full list of available datasets for the run you chose, and brewInput, with the list of datasets to be ntuplized. You can modify this list by editing getDS.sh:
for dataset in Cosmics Commissioning EG EGMonitor JetMETTau JetMETTauMonitor MinimumBias Mu MuMonitor; do
-> for dataset in [your list of datasets]; do
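The selection step above can be sketched as a small shell function. The function name and the exact file handling are illustrative, not the real getDS.sh; only the idea (filter the full DBS listing down to your chosen datasets, writing brewInput) comes from the recipe:

```shell
# Hedged sketch of the selection loop in getDS.sh: keep only the datasets
# you care about from the full DBS listing. The function name and the
# dataset list are illustrative -- edit the list as described above.
select_datasets() {
    run=$1
    : > brewInput                       # start with an empty brewInput
    for dataset in MinimumBias Mu; do   # -> your list of datasets
        # keep lines whose primary dataset matches
        grep "^/${dataset}/" "dbsql_r${run}" >> brewInput
    done
}
```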
- run:
./makeJobs.sh ${RUNNUMBER}
This runs getDS.sh, then brew.py, and creates a subdirectory called r${RUNNUMBER} from which you should then submit the jobs:
cd r${RUNNUMBER}
submitJobs.sh ${RUNNUMBER}
This does NOT submit the jobs; it just prints the commands you have to copy and paste.
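What submitJobs.sh does at this step can be sketched as follows. The function name and the crab cfg naming scheme are assumptions, not the real script; the point, per the recipe, is that the crab commands are only printed, one pair per dataset in brewInput, for you to copy and paste:

```shell
# Hedged sketch: PRINT (do not run) the crab create/submit commands for
# each dataset listed in brewInput. Names here are illustrative.
print_submit_commands() {
    while read -r ds; do
        echo "crab -create -cfg crab_${ds}.cfg"
        echo "crab -submit -c crab_${ds}"
    done < brewInput
}
```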
- when the jobs are done, you can copy the output files to CERN: edit copyFilesToCern.sh, change resilient/lucieg/Commish2010 to resilient/yourusername/yoursubdirectory and user/l/lucieg to user/u/username (u being the first letter of your username), then run:
./copyFilesToCern.sh ${RUNNUMBER}
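The two hand edits to copyFilesToCern.sh can also be done with sed. This is a sketch, not part of the recipe; the function name is illustrative and USERNAME/SUBDIR are placeholders for your own values:

```shell
# Hedged sketch: personalize copyFilesToCern.sh with sed instead of an
# editor. Replaces the author's paths with yours, as described above.
personalize_copy_script() {
    USERNAME=$1
    SUBDIR=$2
    INITIAL=$(printf '%s' "$USERNAME" | cut -c1)  # CERN paths use the first letter
    sed -i \
        -e "s|resilient/lucieg/Commish2010|resilient/${USERNAME}/${SUBDIR}|g" \
        -e "s|user/l/lucieg|user/${INITIAL}/${USERNAME}|g" \
        copyFilesToCern.sh
}
```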
Troubleshooting :
--
LucieGAUTHIER - 10-Jun-2010