EGamma Group Disks

Contact: Matthias Schott

EGamma Group Disks on the Grid

We currently have disk space at the following Tier-1/Tier-2 sites:

  • CERN-PROD_PERF-EGAMMA (10 TB)
  • FZK-LCG2_PERF-EGAMMA (1 TB)
  • RAL-LCG2_PERF-EGAMMA (1 TB)
  • BNL-OSG2_PERF-EGAMMA (1 TB)
  • GRIF_PERF-EGAMMA (1 TB, Paris area)

Useful links and further information

An overview talk can be downloaded at the end of this wiki page.

The management of the group can be done at the following link (for this you need your grid certificate loaded in the browser):

https://lcg-voms.cern.ch:8443/vo/atlas/vomrs?path=/RootNode&action=execute

The accounting of our group space can be found at the following link (be patient, this takes some time):

http://atlddm02.cern.ch/dq2/accounting/global%20view/30/#group

EGamma Group Disk at CERN

The EGamma pool disk at CERN has a quota of 10 TB. This space has NO tape backup. The pool disk is currently available under

/castor/cern.ch/grid/atlas/atlasgroupdisk/perf-egamma 

Remember that this is a separate pool, so you should set STAGE_SVCCLASS=atlasgroupdisk in your environment in order to use the right pool. If you forget to set this, your files will end up in the default pool and will soon become inaccessible. If you write to the right pool, but into any directory other than /castor/cern.ch/grid/atlas/atlasgroupdisk/perf-egamma, the files will be removed overnight, without prior warning.

To set up access you have to log in to lxplus and type

export RFIO_USE_CASTOR_V2=YES
export STAGE_HOST=castoratlas
export STAGE_SVCCLASS=atlasgroupdisk 

Then you should be able to access the disk space via the usual RFIO commands, i.e. rfcp, rfdir, ... The space is also available in the ATLAS Distributed Data Management System, under the endpoint CERN-PROD_PERF-EGAMMA. This means you can upload/download datasets there using DDM tools like dq2-put and dq2-get, or transfer datasets there using DDM subscriptions.
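As a minimal sketch (the subdirectory and file names below are placeholders, not an agreed convention), writing a file into the group pool and checking it could look like this:

export STAGE_SVCCLASS=atlasgroupdisk                                                    # make sure the group service class is set
rfmkdir /castor/cern.ch/grid/atlas/atlasgroupdisk/perf-egamma/mschott                   # placeholder subdirectory
rfcp myNtuple.root /castor/cern.ch/grid/atlas/atlasgroupdisk/perf-egamma/mschott/myNtuple.root
rfdir /castor/cern.ch/grid/atlas/atlasgroupdisk/perf-egamma/mschott                     # check that the file arrived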

Responsible Persons

Matthias Schott (mschott@cernNOSPAMPLEASE.ch) is the group manager of the /atlas/perf-egamma group in VOMS. Contact him if you need to be granted privileges to write in the pool. This can only be done by the group manager.

Production on the Grid

If you want to reprocess EGamma data then I currently suggest using Pathena. In the following example you see the setup on lxplus. In this example the newly created files are not stored on EGamma disk space but under your personal account. In a second step you can then register these datasets to a specified EGamma group space at a Tier-2.

First of all we have to set things up again:

source cmthome/setup.sh -tag=AtlasProduction,15.3.0.1,32,opt,releases  # CMT / Athena release setup
source /afs/cern.ch/atlas/offline/external/GRID/DA/panda-client/latest/etc/panda/panda_setup.sh  # panda-client (pathena)
export PATHENA_GRID_SETUP_SH=/afs/cern.ch/project/gd/LCG-share/current/etc/profile.d/grid_env.sh 
source Scripts/mSetupAthena153.sh # e.g. setting our Athena version
source /afs/cern.ch/project/gd/LCG-share/current/external/etc/profile.d/grid-env.sh  # grid environment
voms-proxy-init -voms atlas
source /afs/cern.ch/atlas/offline/external/GRID/ddm/DQ2Clients/setup.sh  # DQ2 client tools
voms-proxy-init -voms atlas:/atlas/perf-egamma/Role=production  # proxy with the group production role
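To check that the production role was actually attached to your proxy, you can optionally inspect it; the attribute list should contain /atlas/perf-egamma/Role=production:

voms-proxy-info -all   # optional check of the proxy attributes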

If you want to reprocess with a new package, then first check it out in your current release and tag it (otherwise skip this step):

svn cp . $SVNROOT/PhysicsAnalysis/AnalysisCommon/InsituPerformance/InsituIDTrackPerformance/tags/InsituIDTrackPerformance-00-00-03 -m "Adjusting to new framework"

And here are some Pathena examples to then start your reprocessing on the grid (you obviously have to be in your run directory). In these examples we are reprocessing at CERN. For further Pathena information, consult the Pathena wiki:

https://twiki.cern.ch/twiki/bin/view/Atlas/PandaAthena

pathena PhotonReco.py --inDS mc09_valid.107040.singlepart_gamma_Et20.recon.AOD.e342_s462_s520_r729_tid075160 --outDS user09.MatthiasSchott.single_Part_Demo_2 --nFiles=2 --cloud=CERN --tmpDir /tmp
pathena PhotonReco.py --inDS mc09_valid.107040.singlepart_gamma_Et20.recon.AOD.e342_s462_s520_r729_tid075160 --outDS user09.MatthiasSchott.mc09_valid.107040.singlepart_gamma_Et20.recon.AOD.e342_s462_s520_r729_tid075160.photonRecovery --cloud=CERN --tmpDir /tmp
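Once the jobs have finished, you can check the output with the usual DQ2 client commands, for example (assuming dq2-ls and dq2-list-files are available in your DQ2 client setup; the dataset name is the one from the first example above):

dq2-ls user09.MatthiasSchott.single_Part_Demo_2*          # does the output dataset exist?
dq2-list-files user09.MatthiasSchott.single_Part_Demo_2   # which files does it contain?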

Registering Datasets on the Grid

In order to register datasets on the grid on EGamma disk space, you have to have the corresponding privileges (ask Matthias Schott if you need them). In the following you find a brief example.

First of all we have to set things up again:

voms-proxy-init -voms atlas
source /afs/cern.ch/atlas/offline/external/GRID/ddm/DQ2Clients/setup.sh
voms-proxy-init -voms atlas:/atlas/perf-egamma/Role=production

Register dataset which is at RAL to CERN

dq2-register-subscription -s RAL-LCG2_MCDISK --do-not-query-more-sources mc08.106020.PythiaWenu_1Lepton.recon.DPD_EGAMMA.e352_s462_r635_r672_tid064687 CERN-PROD_PERF-EGAMMA
The output should look something like this: Dataset ... subscribed (archived: 0) to CERN-PROD_PERF-EGAMMA.
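Note that a subscription only triggers the transfer; the actual copy can take a while. As a sketch (assuming dq2-list-dataset-replicas is available in your DQ2 client setup), you can watch the replica appear at the destination:

dq2-list-dataset-replicas mc08.106020.PythiaWenu_1Lepton.recon.DPD_EGAMMA.e352_s462_r635_r672_tid064687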

Register dataset which is at FZK to CERN

dq2-register-subscription -s FZK-LCG2_MCDISK --do-not-query-more-sources mc08.105802.JF17_pythia_jet_filter.recon.DPD_EGAMMA.e347_s462_r635_r672_tid064678 CERN-PROD_PERF-EGAMMA

Register dataset which is at FZK to BNL

dq2-register-subscription -s FZK-LCG2_MCDISK --do-not-query-more-sources mc08.008801.Hijing_PbPb_5p5TeV_MinBias.evgen.EVNT.e379_tid039186 BNL-OSG2_GROUPDISK

Uploading something from local disk

dq2-put -s /tmp/mschott/group09.perf-egamma.Test.PythiaZee.recon.AOD.e323_s400_d99_r47 group09.perf-egamma.Test.PythiaZee.recon.AOD.e323_s400_d99_r4 -L CERN-PROD_SCRATCHDISK
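To verify the upload you can, for example, list the files of the freshly registered dataset (assuming dq2-list-files is available; the dataset name is the one from the dq2-put above):

dq2-list-files group09.perf-egamma.Test.PythiaZee.recon.AOD.e323_s400_d99_r4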

Just for information: if you are looking for datasets, just use the usual dq2 commands, e.g.

dq2-list-dataset-site user09.MatthiasSchott.mc09_valid.107040.singlepart_gamma_Et20.recon.AOD.e342_s462_s520_r729_tid075160.photonRecovery.AOD*
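Similarly, to see at which sites a given dataset is stored, you can use dq2-list-dataset-replicas (if available in your DQ2 client version; replace the placeholder with the dataset name found above):

dq2-list-dataset-replicas <dataset name>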
-- MatthiasSchott - 21 Nov 2008