UKI-SCOTGRID-ECDF DPM

This page describes how to access files on the Edinburgh Grid storage. The storage element is a DPM (Disk Pool Manager), so custom protocols are needed to access the data.

xroot protocol

xroot is unauthenticated, so you do not need a valid Grid proxy.

In most cases, no additional environment variables are required.

TFile* f = TFile::Open("root://srm.glite.ecdf.ed.ac.uk//dpm/ecdf.ed.ac.uk/home/lhcb/production/DC06/phys-v2-lumi2/00002093/DST/0000/00002093_00000001_5.dst")

Note the double slash between the hostname and the /dpm path.
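
If you just want to copy a file locally, the same path should also work with xrdcp (a sketch assuming the xrootd client tools are installed; the local destination is purely illustrative):

# Copy a DST from the DPM to local disk over xroot (destination path is an example only)
xrdcp root://srm.glite.ecdf.ed.ac.uk//dpm/ecdf.ed.ac.uk/home/lhcb/production/DC06/phys-v2-lumi2/00002093/DST/0000/00002093_00000001_5.dst /tmp/00002093_00000001_5.dst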

Using xroot and Ganga on ECDF

On ECDF you may need the following environment variable in your .bashrc:

export LD_LIBRARY_PATH='/exports/work/ppe/lib32:'${LD_LIBRARY_PATH}':/exports/work/ppe/sw/builds'

The SGE backend to Ganga also needs this environment variable defined, which requires you to edit your .gangarc. Make sure the value there is the same as in your .bashrc.

Ganga 5

Nothing extra is required other than defining/configuring ganga_utils in your .gangarc; the configuration is managed locally.

Ganga 4

Ganga 4 is not maintained centrally, so it still requires the following in your .gangarc:

[SGE]
kill_str = qdel %s
submit_str = cd %s;qsub -cwd -l h_vmem=2500M -l h_rt=4:00:00 -V %s %s %s %s
preexecute = os.chdir(os.environ["TMPDIR"])
        os.environ["PATH"]+=":."
        os.environ["LD_LIBRARY_PATH"]+=":/exports/work/ppe/lhcb/lhcb-soft/lcg/external/castor/2.1.1-9/slc4_ia32_gcc34/usr/lib:/exports/work/ppe/sw/builds"
        os.environ["DPM_HOST"]="srm.glite.ecdf.ed.ac.uk"
        os.environ["DPNS_HOST"]="srm.glite.ecdf.ed.ac.uk"
        os.system("echo 'XNet.ReadCacheSize: 0' > .rootrc")

The last line is needed for ROOT <= 5.18 to avoid a large memory leak in the xroot protocol.

RFIO

In this case we will use RFIO, which uses Grid-certificate authentication.

You need a valid Grid proxy to access the data via RFIO.
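
A typical way to create and check a proxy (assuming the VOMS client tools are available and you are a member of the lhcb VO):

# Create a Grid proxy with LHCb VOMS extensions
voms-proxy-init -voms lhcb
# Check that the proxy is valid and see where it was written
voms-proxy-info -all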

RFIO Environment variables

Wherever you are running, you need the following environment variables; add them to your .bashrc.

export DPM_HOST='srm.glite.ecdf.ed.ac.uk'
export DPNS_HOST='srm.glite.ecdf.ed.ac.uk'

RFIO Environment variables on Local machines

You should ensure that the following environment variables are set. If you are having problems, the first thing to check is that these variables are defined.

export PPE_UI='/Disk/lochnagar0/grid/glite'
export PATH=${PATH}':'${PPE_UI}'/glite/bin:'${PPE_UI}'/lcg/bin:'${PPE_UI}'/lcg/bin/dpm:'${PPE_UI}'/globus/bin';
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}':'${PPE_UI}'/glite/lib:'${PPE_UI}'/lcg/lib:'${PPE_UI}'/globus/lib'

The following should be set automatically when you source the LHCb environment (typically done when you log in), but you can also add it to your .bashrc.

export LD_LIBRARY_PATH='/Disk/lochnagar0/lhcb/lhcb-soft/lcg/external/castor/2.1.1-9/slc4_ia32_gcc34/usr/lib/:'${LD_LIBRARY_PATH}
export PATH='/Disk/lochnagar0/lhcb/lhcb-soft/lcg/external/castor/2.1.1-9/slc4_ia32_gcc34/usr/bin:'${PATH}

RFIO Environment variables on ECDF

On ECDF you may need the following environment variable:

export LD_LIBRARY_PATH='/exports/work/ppe/lib32:'${LD_LIBRARY_PATH}':/exports/work/ppe/sw/builds'

Using RFIO when running on CONDOR or ECDF

You should have your Grid proxy written to a temporary directory in your home space. Since home directories are shared, the proxy will then be available on all worker nodes.

cd ~
mkdir tmp

And in .bashrc

export X509_USER_PROXY=${HOME}'/tmp/x509up_u######'

Where ###### is your numeric user ID; check the name of your current Grid proxy (for example with voms-proxy-info) to see what this is.

Now whenever you make a proxy it will be created in this directory, which is shared.
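
As a quick check (again assuming the VOMS client tools are available), a proxy created after setting X509_USER_PROXY should appear in the shared directory rather than in /tmp on the local machine:

# The proxy file should now live in ~/tmp
ls -l ${HOME}/tmp/x509up_u*
# voms-proxy-info reports the proxy actually in use, including its path
voms-proxy-info -all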

Using RFIO and Ganga on CONDOR

Ganga 5

Nothing is required other than setting the above variables in your .bashrc (even if you use tcsh) and getting/configuring Greig's Ganga Goodies.

Ganga 4

You need to set the above environment variables in your .gangarc so that they are defined on the worker node:

[Condor_Properties]

env = {'X509_USER_PROXY': '/Home/s0571079/tmp/x509up_u334387', 
    'LD_LIBRARY_PATH': '/Disk/lochnagar0/lhcb/lhcb-soft/lcg/external/castor/2.1.1-9/slc4_ia32_gcc34/usr/lib',
    'DPM_HOST': 'srm.glite.ecdf.ed.ac.uk', 
    'X509_CERT_DIR': '/Disk/lochnagar0/grid/glite/external/etc/grid-security/certificates', 
    'DPNS_HOST': 'srm.glite.ecdf.ed.ac.uk', 
    'HOME' : '/Home/s0571079' , 
    'BASH_ENV' : '/Disk/lochnagar0/lhcb/lhcb-soft/scripts/lhcb-condorsetup.sh'}

The last two environment variables here are standard. Make sure to change the others to match the values in your .bashrc.

Using RFIO and Ganga on ECDF

Ganga 5

Nothing is required other than setting the above variables in your .bashrc (even if you use tcsh) and getting/configuring Greig's Ganga Goodies.

Ganga 4

The SGE backend to Ganga 4 also needs the environment variables above to be defined, which requires you to edit your .gangarc. Make sure to change these variables to match the values in your .bashrc.

[SGE]
kill_str = qdel %s
submit_str = cd %s;qsub -cwd -l h_vmem=2500M -l h_rt=4:00:00 -V %s %s %s %s
preexecute = os.chdir(os.environ["TMPDIR"])
          os.environ["X509_USER_PROXY"]="/exports/home/gcowan1/tmp/x509up_u320124"
          os.environ["X509_CERT_DIR"]="/exports/work/middleware/WN/etc/grid-security/certificates"
          os.environ["PATH"]+=":."
          os.environ["DCACHE_REPLY"]="".join(("eddie",os.uname()[1][4:],".ecdf.ed.ac.uk"))
          os.environ["DCACHE_CLIENT_ACTIVE"]="1"
          os.environ["DPNS_HOST"]="srm.glite.ecdf.ed.ac.uk"
          os.environ["DPM_HOST"]="srm.glite.ecdf.ed.ac.uk"
          os.system("echo 'XNet.ReadCacheSize: 0' > .rootrc")
          os.environ["LD_LIBRARY_PATH"]="$LD_LIBRARY_PATH:/exports/work/physics_ifp_ppe/lhcb/lhcb-soft/lcg/external/dcache_client/1.8.0/slc4_ia32_gcc34/dcap/lib:/exports/work/physics_ifp_ppe/lhcb/lhcb-soft/lcg/external/castor/2.1.1-9/slc4_ia32_gcc34/usr/lib/:/exports/work/ppe/sw/builds"                  

The first two environment variables here are standard, and the DCACHE environment variables are shown here for completeness. Make sure to change these variables to match the values in your .bashrc.

RFIO in ROOT

RFIO is authenticated, so you do need a valid Grid proxy.

To open a file:

TFile* f = TFile::Open("rfio:/dpm/ecdf.ed.ac.uk/home/lhcb/production/DC06/phys-v2-lumi2/00002093/DST/0000/00002093_00000001_5.dst")

Note that you do not need the hostname after rfio:/. This is because you will already have DPNS_HOST and DPM_HOST set in your environment (see above). Of course, opening a DST file in ROOT doesn't make too much sense, but it gives you an idea of what is possible.
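
Outside of ROOT, the same environment also lets you copy a file from the DPM with the rfio copy command (a sketch assuming rfcp from the DPM/Castor client tools is on your PATH and the variables above are set; the local destination is illustrative):

# Copy a DST from the DPM to local disk via RFIO
rfcp /dpm/ecdf.ed.ac.uk/home/lhcb/production/DC06/phys-v2-lumi2/00002093/DST/0000/00002093_00000001_5.dst /tmp/00002093_00000001_5.dst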

DaVinci

Locally, the castor protocol is hacked to run RFIO with DPM. You do not need anything special in DaVinci because of this.

LHCbEdinburghGroupDataFiles

DC06 data has been stored on the DPM and is listed in LHCbEdinburghGroupDataFiles. The simplest way to access it is through Ganga using Greig's Ganga Goodies.

To check everything works

To check that you can see the DPM try:

dpns-ls /dpm/ecdf.ed.ac.uk/home/lhcb/production/DC06/v1r0/

Or do this for any other DPM location listed in LHCbEdinburghGroupDataFiles.

The simplest way to check everything is through Ganga using Greig's Ganga Goodies:

gu.testjob('DPM')

Monitoring

A temporary monitoring page has been set up by Greig here. For the time being it is only accessible from within EdLAN.

-- RobertLambert - 06 May 2008
