PASTE meeting

Minutes

25 Jan 2011

http://indico.cern.ch/conferenceDisplay.py?confId=124238

Plugin for file replication

It seems the plugin choice is hard-coded in the API. Fede says he can fix this; there is no need to change the web portal.

For example, suppose we have 4 files, each of which MUST have 3 replicas: one at CERN, a second MDST at another T1, and another replica in a DST. With the new plugin, the replicas of all the files are grouped at the same sites, whereas with the old plugin the replicas would be distributed, so every file could have its replicas at different sites. For small productions, the grouped behaviour is better.
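The two placement strategies can be sketched as follows. This is a minimal illustration, not the actual DIRAC plugin code; the function and site names are hypothetical, and each file is assumed to need a CERN replica plus replicas at extra sites.

```python
# Sketch of the two replica-placement strategies discussed above
# (hypothetical code, not the real DIRAC plugin).
import random

T1_SITES = ["CNAF", "GRIDKA", "IN2P3", "PIC", "RAL", "SARA"]  # example names

def place_grouped(lfns, n_extra=2, rng=random):
    """New behaviour: every file in the batch shares the same extra sites."""
    extra = rng.sample(T1_SITES, n_extra)  # one choice for the whole batch
    return {lfn: ["CERN"] + extra for lfn in lfns}

def place_distributed(lfns, n_extra=2, rng=random):
    """Old behaviour: each file may get a different set of extra sites."""
    return {lfn: ["CERN"] + rng.sample(T1_SITES, n_extra) for lfn in lfns}
```

With `place_grouped`, a small production ends up with all its replicas co-located, which is what the new plugin is meant to achieve.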

A problem: we do not have any tool to check the total number of replicas of an LFN, i.e. a tool that systematically scans a directory and reports how many replicas each file has. Philippe will write it.
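Philippe's tool does not exist yet; the following is only a sketch of the check it would perform, assuming a catalogue dump is already available as a mapping from each LFN in the directory to the list of storage elements holding a replica (all names here are hypothetical).

```python
# Hypothetical sketch of a replica-count check over a directory listing.
from collections import Counter

def count_replicas(catalogue):
    """catalogue: {lfn: [storage elements]}. Return {lfn: replica count}."""
    return {lfn: len(ses) for lfn, ses in catalogue.items()}

def summarize(catalogue, expected=3):
    """Return a histogram of replica counts and the LFNs below `expected`."""
    counts = count_replicas(catalogue)
    histogram = Counter(counts.values())
    missing = sorted(lfn for lfn, n in counts.items() if n < expected)
    return histogram, missing
```

The histogram makes the "sometimes 2, other times 3" situation visible at a glance, and the `missing` list gives the files that need extra replication.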

Nominally we need 4 replicas, but in practice the number varies: sometimes 2, other times 3.

The next patch release will contain part of the new plugin; Fede has to fix the API so that it uses the desired plugin. The new plugin is already on volhcb20, so it has been tested. The problem is the large number of hot fixes, and that they did NOT make a branch. The changes have now been put into the head, but a merge still has to be done. Then the patched release will be installed on volhcb20, the certification system, and all the agents restarted; if it works, the release can be rolled out.

Storage usage accounting

The StorageUsage backend has been rewritten. It is now done for user data; it should be extended to MC and real data.

For archiving datasets, see Philippe's twiki.

For the freezing: how to select a subset of the data? Maybe one DST per run? We should compute how much space it would take. Technically it does not make much difference whether the datasets are at CERN or at another site; it is just more convenient if they are at CERN. Then, at the production level, another replication production has to be added. Alternatively this could be included in the replication plugin, but people agree it is better to keep things separated.
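The "one DST per run, then compute the space" idea can be sketched as below. This is an illustration only, assuming the DST metadata is available as `(run, lfn, size)` tuples; the selection rule (first DST seen per run) is a placeholder for whatever rule is actually agreed.

```python
# Hypothetical sketch: select one DST per run and estimate the frozen size.
def freeze_selection(dsts):
    """dsts: iterable of (run_number, lfn, size_bytes).
    Keep one DST per run and report the total space of the frozen subset."""
    chosen = {}
    for run, lfn, size in dsts:
        chosen.setdefault(run, (lfn, size))  # keep the first DST of each run
    total = sum(size for _, size in chosen.values())
    return chosen, total
```

Running this over the full list of DSTs would give the space estimate the meeting asked for.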

Next release

It is not clear which process to follow. Andrei suggests doing it in 2 stages: first the DIRAC part, which does not change the functionality, and then the LHCb one. In this case we would have to do 2 certifications in 3 weeks. Andrei does not know what the certification procedure is; Joel does it. How long does it take? It depends: if everything works, it is short, but many times problems are found (e.g. for the steps). For LHCbDirac we want to include only the patches (old hotfixes), some more things, and little changes to take into account the little changes in DIRAC. Good question: are the little modifications that are in DIRAC, and that Andrei wants to include in the patch release, really urgent, or could they wait until the big-bang release?

About the changes of functionality in LHCbDirac: these will NOT include the steps modification (that is the big change); the patch release only includes little fixes.

Very important: how to proceed with the certification strategy. There is no test suite defined. Joel and Fede are trying to set up a certification procedure. Doing it properly would take very long: every developer should write a test suite for his own code, which is NOT feasible for the next release. For the next release it will be partially manual; nothing is automated. It would be very good to have a prototype in 6 months.

It is agreed to do it in one stage.

-- ElisaLanciotti - 25-Jan-2011
