Day
| Time
|
|
|
Mon
| 14:00
| Topic
| Workshop on XML detector description
|
References
| Agenda, minutes, slides: HTML
|
Tue
| 09:00
| Topic
| Framework tutorial
|
References
| Agenda, slides: HTML
|
Wed
| 09:05
| Topic
| C. Tull: Welcome
|
References
| Slides: HTML,
PS,
PDF,
PPT
|
Summary
|
- Welcome to Berkeley
- Dinner
- E-mail facilities: 51L after Wednesday noon
|
Wed
| 09:15
| Topic
| N. McCubbin: Introduction, workshop agenda
|
References
| Slides: HTML,
PS,
PDF,
PPT
|
Summary
|
- Welcome to software workshop, thanks to hosts
- Topics: Architecture (tutorial, session), world-wide computing incl.
Grids
- Good start with XML detector description workshop and framework
tutorial
|
Wed
| 09:25
| Topic
| F. Gianotti: Physics aspects: Introduction, milestones
|
References
| Slides: HTML,
PS,
PDF
|
Summary
|
- MC: Generators being interfaced to HepMC++ and to framework
- Atlfast: Full OO design going on; milestone: prototype ready and
interfaced to framework autumn 2000, by the same time old version
available in framework; hence by autumn 2000, physics groups can run
MC generators and fast simulation in new environment
- Deadline: Atlas week February 2001, physics workshop June 2001 (full
validation)
- Physics validation of Geant4: Comparison of test beam data with G4
data ongoing, revealing some problems that are being investigated,
profiting from huge experience in Atlas. Most problematic area:
hadronic interactions at low energy. Dedicated meeting in May 2000 at
CERN. Deadlines: G4 physics workshop in October 2000, physics workshop
in June 2001, full validation (= replace Geant3) by end 2001
- Requirements on simulation: G3 and G4 simultaneously available with
same geometry to allow for direct comparisons; Fluka interfaced to
Atlas geometry (project started)
- Intermediate simulation (perhaps with shower parametrisation)?
- Reconstruction: new reconstruction in framework by end 2000; by that
time scale access to data in Objectivity required, then combined
performance groups can start to use reconstruction in new framework
and validate the Physics TDR results. Aim is for full validation by
end 2001
- 2001: MDC 0 (test of full functionality of the full chain); end 2002:
MDC 1 (0.1%), physics contents to be defined (low or high luminosity);
2003: MDC 2; 2004: 'Readiness document'
|
Discussion
|
- Q: Is there room for a 10M jets test? A: Depends crucially on how
the trigger will be simulated
- Q: Will distributed computing be exploited for the mock data
challenge? A: Yes, for MDC 1 and MDC 2, but events will not be split
|
Wed
| 09:50
| Topic
| I. Hinchliffe: MC generators and HepMC++
|
References
| Slides: HTML,
PS,
PDF
|
Summary
|
- Each generator exists as external package in repository. Aim is to
find one person per generator/package for maintenance. General purpose:
Herwig, Isajet, Pythia; special purposes: Phojet, higher-order QCD
processes
- Standard interface: HepMC, also between generators and decay packages.
Need StdHep for old Fortran generators, to exist for a few years at
least
- HepMC (Matt Dobbs and Jorgen Beck Hansen): Intended as replacement for
HEPEVT common block, supports modularisation of generators, event
described as generic tree with particles and vertices, fairly small,
provides iterators, depends on STL and CLHEP, is in repository, has
its own particle data service for now. Questions: will other
experiments use it? Could we ask Geant4 to use it or to provide an
interface to it?
- Difficult to find people for (trivial) maintenance tasks that are
considered unrewarding. The consequence is private copies everywhere,
work gets done over and over again, we are still using very old
versions, packages become orphans
- Integration: output from all generators in common format; use one
generator for high-pt and another one for minimum bias; read events
from file or generate them on the fly; set parameters at run time;
write selected parts of event. Class diagrams for generator modules
- Status: EventService for generating events to come very soon,
single particle gun, Isajet done, Phojet almost done, Pythia requires
more work (estimate 15 June 00), Herwig (estimate September 00).
Generator level filter example to come, waiting for framework to
provide event persistency
- Actions: integration in cvs repository (where?), integration into
releases, particle data services
- Open issues: Random number management and storage (common service for
all generators), integration of special purpose decay MC; what to do
with K_S and Lambda? Probably decay in generator
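The event-record design sketched above (an event as a generic tree of vertices and particles, with iterators over its contents) can be illustrated with a toy Python sketch. The class and attribute names here are invented for illustration and are not the actual HepMC API:

```python
# Toy sketch of a HepMC-style event record: an event is a tree of
# vertices, each with incoming and outgoing particles. All names are
# illustrative only, not the real HepMC classes.
class Particle:
    def __init__(self, pdg_id, px, py, pz, e):
        self.pdg_id = pdg_id
        self.momentum = (px, py, pz, e)
        self.end_vertex = None   # decay vertex, if any

class Vertex:
    def __init__(self, position=(0.0, 0.0, 0.0, 0.0)):
        self.position = position
        self.particles_in = []
        self.particles_out = []

class GenEvent:
    def __init__(self):
        self.vertices = []

    def particles(self):
        """Iterate over all particles, as HepMC's iterators do."""
        for v in self.vertices:
            for p in v.particles_out:
                yield p

# Build a toy event: a Z0 decaying to mu+ mu-
evt = GenEvent()
prod = Vertex()
z0 = Particle(23, 0.0, 0.0, 10.0, 91.5)
prod.particles_out.append(z0)
decay = Vertex()
z0.end_vertex = decay
decay.particles_in.append(z0)
decay.particles_out.append(Particle(13, 5.0, 0.0, 5.0, 45.7))
decay.particles_out.append(Particle(-13, -5.0, 0.0, 5.0, 45.8))
evt.vertices.extend([prod, decay])

pdg_ids = [p.pdg_id for p in evt.particles()]
```

The point of the tree structure is that generator modularisation (production, decay, afterburners) maps naturally onto attaching further vertices to existing particles.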
|
Discussion
|
- Q: If HepMC is general and not confined to Atlas, it should not
inherit from ContainedEvent. A: It does not - a different way has been
found
- Q: What's the issue about the particle class? A: Full access to PDG
data aimed for. Will be looked at by A-team
- Q: Very important to provide merging of events at different levels.
A: Fully being considered
- Q: How difficult would it be to interface HepMC to Geant4? A: Not
expected to be a major operation
|
Wed
| 10:15
| Topic
| P. Clarke: Status of ATLFAST integration into Gaudi
|
References
| Slides: HTML,
PS,
PDF,
PPT
|
Summary
|
- History: re-engineer Atlfast++ into OO, remove Root dependencies,
standalone, later integration into Gaudi
- After last A-team meeting: Full integration into Gaudi immediately,
current Atlfast++ functionality as starting point (very little design
work to do)
- Reminder of Atlfast++ structure
- Working decisions: keep basic concept of Makers, each of which will
become a Gaudi algorithm; each maker to be re-designed into OO.
De-couple detector simulation, reconstruction, and histogramming, and
remove knowledge of makers about each other (communication via TDS).
Keep output in same structure
- Will make development of new makers much easier
- Histograms and user analysis are just considered other algorithms in
Gaudi
- Status: task now completely defined, example algorithm (electron maker)
written, need to get it to run in Gaudi, then more algorithms
(CellMaker, ClusterMaker, MuonMaker) to follow
- Work in parallel (perhaps, but not necessarily, for September): More
complete long term user requirements collection
- For each maker: upgrade to include improvements of Fortran versions;
add knowledge from other users (preferably by enabling them to do it
themselves)
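The working decision above, each Maker becoming a framework algorithm that knows nothing about the other makers and communicates only through the transient data store, can be sketched as follows. The class names, store keys, and the trivial "clustering" are invented for illustration and are not the actual Gaudi or Atlfast interfaces:

```python
# Toy sketch of de-coupled "maker" algorithms communicating only via a
# transient data store (here a plain dict keyed by path-like strings).
# Names are illustrative, not the real Gaudi API.
class Algorithm:
    def execute(self, store):
        raise NotImplementedError

class CellMaker(Algorithm):
    def execute(self, store):
        # pretend simulation: publish some cell energies to the store
        store["/Event/Cells"] = [1.2, 0.4, 3.1, 0.2]

class ClusterMaker(Algorithm):
    def execute(self, store):
        # read cells from the store, never from CellMaker directly
        cells = store["/Event/Cells"]
        store["/Event/Clusters"] = [e for e in cells if e > 1.0]

def run_event(algorithms):
    store = {}
    for alg in algorithms:   # the framework schedules the sequence
        alg.execute(store)
    return store

store = run_event([CellMaker(), ClusterMaker()])
```

Because the only coupling is the store key, a new maker (or a histogramming or user-analysis algorithm) can be added to the sequence without touching the existing ones, which is what makes the development of new makers easier.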
|
Discussion
|
- Planning matches the physics coordination timescale
- Q: How do you plan to set the many parameters in Atlfast? Have you
considered XML? A: For the moment, there are "hardwired" defaults, and
the setting would be done via the jobOptions file
- Q: Would it be possible to use other offline jet finders than the one
now in Atlfast? A: Yes, but this may require some work, not clear
for September
- Q: Why did you not start from the C++ version that was not coupled to
Root? A: It was lacking quite some physics functionality as compared
with the Root version
- Q: Does the separation of generation / reconstruction / histogramming
have any performance implication? A: This will be studied, and
potential performance problems will be addressed then
|
Wed
| 10:50
| Topic
| K. Amako: Status of Geant4 physics validation
|
References
| Slides: HTML,
PS,
PDF,
PPT
|
Summary
|
- Validation project has just started, in simulation session on Thursday
there will be a few related talks
- Two related activities: Collaboration project with G4 team, internal
activities in Atlas
- Collaboration with G4 team: G4 calling for 'expression of intent', aim
for comparison projects with experimental groups which would be the
prime projects for 2000. Constraints: projects last 6 months at most,
only few activities at a time, avoiding duplication but covering most
aspects relevant to the experiments. Atlas interested (EM barrel, Had
endcap and forward, Had barrel), tests had started in Atlas
independently. Significant manpower involved. Atlas requested that
these projects must not prejudice other areas where action by the
Geant4 team is needed
- Other experiment replies: ESA (low energy proton scattering), BaBar
(dimuon data to study delta ray distributions and dE/dx), CMS (tracking
detector, hadronic calorimeter)
- No official reply yet from Geant4; issues to be discussed: how to share
responsibilities, when does the project start, how to kick off the
collaboration, how to proceed, ...?
- Internal activities in Atlas: G4 comparison/validation started in most
subsystems (thanks to A. dell'Acqua's courses), problems reported to
Geant4 team which were considered valuable input. Subsystems'
activities independent from each other so far
- Geant4 physics meeting in Atlas proposed (geometry, results of
comparisons with Geant3 and test beam data, which test beam data
exist, ...). First meeting to take place on May 18 at 09:00 h at CERN
|
Discussion
|
- Q: Is the XML detector description work being discussed somewhere?
A: The meeting of May 18 should concentrate on Geant4 validation
- Q: Are there efforts to design a G4 application for Atlas? A: Will be
discussed in the simulation session
- Q: Will the comparisons of G3/G4 geometries be a topic in the meeting?
A: Yes
|
Wed
| 11:10
| Topic
| RD Schaffer: Status of digits and hits from Geant3
|
References
| Slides: HTML,
PS,
PDF,
PPT
|
Summary
|
- Global planning: G3 digits available April 2000, G3 hits available
July 2000
- Digits: classes defined for digits and containers, definition of
identifiers, minimal amount of detector description information in
order to make sense of the data. User testing needed
- ID: digits available since early 1999 (pixel/SCT being checked now).
LAr EM, HEC available since mid 1999, FCAL only partially done
(awaiting tube to channel mapping). TileCal available since 0.0.41,
being checked. Muons: MDT (since 1999), RPC (since 0.0.41), checks
in reconstruction under way; TGC, CSC remain to be done. High priority
for FCAL (Simion, Schaffer) and TGC (Goldfarb et al)
- Hits: two elements: Hits classes and containers, hits to digits
transformation. Hits classes and containers are straightforward.
ID: Schaffer, Bentvelsen(?), LAr (Schaffer, Simion), TileCal
(Solodkov), Muons (Goldfarb et al). End July milestone looks reasonable
- Hits to digits transformation to be organised in subsystems, should
be done after migration of hits and digits to full-fledged TDS
objects. Both items to be taken note of in the plan. Migration after
the September release?
- Pileup requirements to be discussed - should something special be done
for G3, or should we wait for general hits and digits?
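A minimal sketch of the hits-to-digits step described above, with containers keyed by identifiers, might look like the following. The identifier scheme, energy summing, and threshold are all invented for illustration and are not the actual Atlas classes:

```python
# Toy hits-to-digits transformation: each hit carries a channel
# identifier and an energy deposit; digitisation sums deposits per
# channel and zero-suppresses below a threshold. All names and cuts
# are illustrative only.
from collections import defaultdict

class Hit:
    def __init__(self, identifier, energy):
        self.identifier = identifier   # e.g. (system, layer, cell)
        self.energy = energy

def digitize(hits, threshold=0.1):
    summed = defaultdict(float)
    for h in hits:                     # sum deposits channel by channel
        summed[h.identifier] += h.energy
    # the "digit container": identifier -> digitised energy
    return {ident: e for ident, e in summed.items() if e > threshold}

hits = [Hit(("LAr", 1, 7), 0.3), Hit(("LAr", 1, 7), 0.2),
        Hit(("Tile", 2, 4), 0.05)]
digits = digitize(hits)
```

Keeping hits channel-addressable like this is also what makes pile-up straightforward in principle: hits from several events can be merged into one container before the digitisation step.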
|
Discussion
|
- Q: For FCAL, do we want x/y or r/phi geometry? A: x/y. Should be done
by June
- TGCs will take a couple of months still. Limiting factor is the
detector description, some shortcut perhaps possible
- Taking new people on board takes a long time because of the required
training
- The purpose of G3 hits is to do pile-up; we need the facility to
produce pile-up of G3 hits and digitise afterwards
- Time and memory consumption for pileup is a critical issue
- Every effort will be made to meet the milestones
|
Wed
| 11:25
| Topic
| D. Barberis: Software related activities of muon and
b-tagging groups
|
References
| Slides on Muon reconstruction at Level 2:
HTML,
PS,
PDF
Slides on b tagging at Level 2:
HTML,
PS,
PDF
|
Summary
|
- Muon reconstruction at Level 2: Two algorithms being developed; their
combination provides powerful selection and rejection across
all eta ranges
- B-tagging at level 2: start from jet ROI, select Pixel hits compatible
with tracks. Asymptotic IP resolution: ~ 30 microns. More difficult at
high luminosity
- Primary vertex search at high luminosity: histogramming method
|
Discussion
|
- Q: How are these muon algorithms implemented? A: Fortran. Q: Are there
plans to migrate the algorithms to OO/C++? A: Yes, this is even
unavoidable in the trigger environment. It has partly been written in
C++
- Q: Are the 23 microseconds per ROI for the muon alone, or for muon
plus ID? A: Not known
- Q: Is the offline efficiency improvement also obtained for high
luminosity? A: Yes
|
Wed
| 11:50
| Topic
| M. Bosman: Software related activities of e-gamma and
jets-etmiss groups
|
References
| Slides: HTML,
PS,
PDF,
PPT
|
Summary
|
- Reconstruction: See the
reconstruction page, with the
plan and the
entity list
- Jet reconstruction: basic steps: Preparation of input, jet finding
(jet finder library), jet energy calibration (experimental aspects,
physics aspects), jet identification (b-jet, tau-jet etc)
- Preparation of input: treated in LAr and TileCal systems, to be
presented later this week. Jet finding: new OO code being written,
comparison with old software foreseen. Alternative sliding window
algorithm being studied, too; activities on jet finder library
starting. Jet energy calibration: algorithms exist in old framework,
volunteers wanted to re-implement these in OO/C++ in new framework.
Energy flow to be done from scratch. Tau-jet identification: algorithm
exists in Fortran, volunteer needed for OO re-implementation
- Et miss: Prepare input, Et miss reconstruction. Prepare input same as
for the jet reconstruction. Again algorithms in Fortran exist,
volunteers needed for re-implementation
- E/gamma: Prepare input in the hands of systems, available by September.
e/gamma identification: Some algorithms need further evolution,
authors of Fortran code have been asked to contribute to OO
implementation
- Calibration procedures: physics processes identified, thoughts ongoing
about technical infrastructure
- Simulation: most work going on in LAr and TileCal systems. G4
validation considered very important. Shower parametrisation starting,
volunteers needed to obtain a detailed understanding
- Other aspects treated in working groups: Physics studies, e/jet
separation etc serving as benchmarks for studies of new software. Aim
is to massively develop and use the new code
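As one concrete illustration of the cluster-finding step above, a sliding-window search over an eta-phi energy grid can be sketched as follows. The window size, threshold, and grid are arbitrary illustrative choices, not the parameters of the Atlas algorithm:

```python
# Toy sliding-window cluster finder on an eta-phi energy grid: slide a
# fixed-size window over the grid and keep windows whose central cell
# is a local maximum and whose summed energy passes a threshold.
def sliding_window_clusters(grid, window=3, threshold=2.0):
    n_eta, n_phi = len(grid), len(grid[0])
    half = window // 2
    clusters = []
    for i in range(half, n_eta - half):
        for j in range(half, n_phi - half):
            # sum the energy in the window centred on (i, j)
            e_sum = sum(grid[a][b]
                        for a in range(i - half, i + half + 1)
                        for b in range(j - half, j + half + 1))
            # require the central cell to be a local maximum
            local_max = all(grid[i][j] >= grid[a][b]
                            for a in range(i - half, i + half + 1)
                            for b in range(j - half, j + half + 1))
            if e_sum > threshold and local_max:
                clusters.append((i, j, e_sum))
    return clusters

grid = [[0.0] * 5 for _ in range(5)]
grid[2][2] = 3.0    # one seed cell well above threshold
grid[2][3] = 0.5    # a small neighbouring deposit
clusters = sliding_window_clusters(grid)
```

The local-maximum requirement is what prevents one energy deposit from being counted as several overlapping clusters.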
|
Discussion
|
- Q: Was it considered to include tracks into the jet finding? A: Not
yet, but this will happen in the Energy Flow effort
- Q: How many people would be needed to satisfy all requirements for
volunteers? A: Largest problem is in Et miss, in other areas volunteers
are in sight. One or two people would make a big difference
|
Wed
| 14:00
| Topic
| Data base, event, detector description
|
References |
Minutes: |
Plain text |
Slides: |
D. Malon: Atlas database work plan:
HTML,
PS,
PDF,
PPT |
|
RD Schaffer: Discussion on notes for Id's, hits and digits:
HTML,
PS,
PDF,
PPT |
|
S. Bentvelsen: Status of the Atlas detector description:
HTML,
PS,
PDF,
PPT |
|
S. Goldfarb: Muon database status and plans:
HTML,
PS,
PDF,
PPT |
|
RD Schaffer: Objy infrastructure and activities:
HTML,
PS,
PDF,
PPT |
|
Wed
| 16:30
| Topic
| Architecture
|
References
|
|
Thu
| 09:00
| Topic
| N. McCubbin: General news, status
|
References
| Slides: HTML,
PS,
PDF,
PPT
|
Summary
|
- Not covered here in detail: architecture, platforms, CERN review
of LHC computing
- Architecture: tutorial was a big success, repeat at CERN already
fully booked. Review May-prototype based on experience, assess
strategic approach
- People: Lassi Tuura has left, Chris Onions to leave in September.
This leaves open, among other things, the training coordinator role
and the SRT maintenance
- Katsuya Amako nominated simulation coordinator, will be based at CERN
for a year soon. David Quarrie nominated Chief Architect
- Main occupation of CSG has been the plan. Version
available
on the Web
for comments. Milestones to be put in official Atlas milestone system
after June CSG meeting. Early version to be shown to software panel of
LHC computing review, mainly to demonstrate progress
- CSG has discussed Fortran90 request from Saclay, recommendation to
follow
- Technical group issue: Repository policy confirmed: allow maximum
flexibility for developers, early tests (nightly builds), constraints
by required reproducibility. Test area closure confirmed for June 1.
Medium to long term future of Atlas release tool being discussed
- Software agreements and MOUs: variously discussed. Aim is formal
commitment for a software deliverable or contribution of effort,
similar as for hardware. Distinction between core software, physics
oriented software, analysis software. Only the first two covered by
(different) kinds of agreement. Details of this distinction still to
be clarified. See Policy
document, being discussed in NCB right now. Trying to pilot this
with the framework
|
Discussion
|
- Q: SRT review: Is this review to address code distribution as well?
A: Yes, but not with the highest priority. It is a formal requirement.
Q: This should really be considered high priority
|
Thu
| 09:35
| Topic
| H. Drevermann: Atlantis
|
References
| N/A
|
Summary
|
- Since last year's talk: now more events available with even more
hits; using Atlantis, a bug was found in the data transmission
- Some people of UCSC joined, likely to maintain Atlantis for some time
to come
- Atlas: enormous hit density in a small spatial region (much smaller
than central Aleph tracking chambers, for example)
- Aim of event display: check pattern recognition. Adequate projections
required for maximum usefulness
- Candidates: phi/rho, phi/z, rho/z, phi/eta. All fine for Pixel and
SCT. TRT barrel only phi/rho, TRT endcap only phi/z. Hence hardware
construction suggests adequate projections
- Rho/phi is a good alternative to x/y to de-populate the central region
and zoom into it. Compression in rho helps find steep tracks
- Phi/theta makes pattern recognition very simple
- 3-dimensional V plot: combining phi/rho and phi/theta. Tracks are
V, opening and direction indicate momentum and charge, respectively
- For the V plot, the z position of the vertex is required; a change of
1 mm has a considerable effect. In the beginning, z was found
empirically. Not feasible
with pileup events. Combination of all hits to pairs or triplets,
extrapolating to rho=0 and histogramming delivers a good z position
information
- The V-plot is still far too densely populated for pile-up events, although some
tracks can be found. Density distribution (layers in phi versus eta,
good for barrel and endcaps) can be used to suppress hits which do not
belong to tracks. Density distributions take rounding effects into
account. Low momentum tracks (< 1 GeV) are lost this way
- Recovery of low momentum tracks: Perigee parameters (phi', eta'
chosen such that they are constant). Different parameters select
different momentum regions
- Can be used for comparisons of different tracking algorithms
- Event with two jets and two muons identified correctly
- Some low-momentum isolated muons (1.6 GeV) not passing the filter,
due to wrong z (gap between stereo angle layers)
- Pattern recognition check on a track by track basis: direct comparison
(via picture), or any algorithm against filtered picture
- Data flow: Atlantis reads XML files written by Paso/Gaudi or any other
program, interim storage before Atlantis works with the event. Direct
event transmission from other programs implemented, too
- Next: include calorimeter data (complex task - ECAL started), muon
detector data (first ideas), full tracks and track extrapolation,
visualisation of secondary vertices (Dali as starting point),
momentum space rather than real space, 4-momentum calculations and jet
finding
- Gary Taylor started to migrate the Fortran code to Java
- Hans available for hands-on and discussion in the Perseverance Hall
today and tomorrow afternoon
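The z-finding procedure described above, combining hits into pairs, extrapolating each pair to rho=0 and histogramming the intercepts, can be sketched like this. The toy geometry, binning, and function names are invented for illustration:

```python
# Toy version of the vertex-z finder: take hit pairs in (rho, z),
# extrapolate the straight line through each pair to rho = 0, fill a
# histogram of the z intercepts, and return the most populated bin.
from itertools import combinations

def vertex_z(hits, bin_width=1.0):
    # hits: list of (rho, z); histogram is a dict bin_index -> count
    histo = {}
    for (r1, z1), (r2, z2) in combinations(hits, 2):
        if r1 == r2:
            continue  # cannot extrapolate a pair at the same radius
        # straight-line extrapolation to rho = 0
        z0 = z1 - r1 * (z2 - z1) / (r2 - r1)
        b = int(z0 // bin_width)
        histo[b] = histo.get(b, 0) + 1
    best = max(histo, key=histo.get)
    return (best + 0.5) * bin_width   # centre of the peak bin

# Three hits from one straight track crossing rho = 0 at z = 5,
# plus one unrelated noise hit
hits = [(10.0, 7.0), (20.0, 9.0), (30.0, 11.0), (15.0, 2.0)]
z = vertex_z(hits)
```

Pairs from the same track all land in one bin, while pairs involving noise hits scatter, which is why the peak of the histogram is a robust z estimate even before any pattern recognition.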
|
Discussion
|
- Q: What about the inhomogeneous field? A: Data are required. The V
plot will survive (the Vs will just be slightly curved), but there are
questions about the filter. If there are significant changes in rho-z,
the effect may be more difficult to handle
|
Thu
| 11:15
| Topic
| H. Meinhard: The Atlas Computing PLAN
|
References
| Slides: HTML,
PS,
PDF,
PPT
PBS: PDF
Gantt chart: PDF
Task report: PDF
|
Summary
|
- Motivation: external (CERN review of LHC computing, LHCC, Executive
Board/TC), internal (understand project, check timing, software
agreements and MOUs)
- Basis: Project Breakdown Structure (= work breakdown structure),
governs schedule
- Outline of PBS
- Schedules using Microsoft Project: Gantt charts, task reports (the
latter showing explanatory notes)
- Planning until end 2001 much more detailed. Two major cycles identified
(one until end 2001, one until end 2003). Driving events: TDRs, MDCs,
MOUs
- Work ongoing, current version incomplete, probably contains
inconsistencies and errors
- Next: dependencies, resources, identify and classify milestones,
implement follow-up
- Help of everybody needed to fully benefit from the work
|
Discussion
|
- Q: Make sure responsible persons can maintain their part of the
schedule themselves. A: That is foreseen
- Project 2000 may help with some of the technical aspects mentioned
|
Thu
| 11:50
| Topic
| G. Poulard: Technical Group Issues
|
References
| Slides: HTML,
PS,
PDF,
PPT
|
Summary
|
- Presenting slides prepared by Maya Stavrianakou
- Repository policy: Policy document updated
- External software: inside the repository (sometimes only as stub
pseudopackage for SRT control), outside
- Platforms: RedHat 5.1 phased out, 6.1 now default. AIX dropped earlier
this year. Digital and HP under discussion
- Releases: fortnightly developer releases, nightly builds of head; some
improvements being worked on
- Production release 1.0.0: not tested as extensively as hoped for
- Release tools: maintenance problem due to departure of Lassi (same for
cvs server, atlas-sw web server etc)
- Technical Group issues: documentation for users and developers,
support for Java being addressed; forwarded to other bodies: naming
conventions, garbage collection, setup scripts, exceptions
|
Discussion
|
- Q: Is it the intention that one of the supported platforms is actually
available at CERN? A: There is no explicit policy, but this has been
the implicit assumption. The easiest way to exploit these platforms is
via the Web interface currently provided by
http://atlas-sw.cern.ch
- Q: We need to be a little more proactive solving the maintenance
problems. A: Yes, that is understood
- Q: For Solaris, are some packages still turned off? A: Yes, this is the
case, part of which is due to the Fortran90 issue. Q: What is the plan
to move to the new C++ compiler on Solaris? A: Atlas has not made an
official request so far, but it is understood that our basic acceptance
tests have been completed successfully. Q: Solaris 2.7 is very
different from 2.6, to be considered as a different platform
|
Thu
| 14:00
| Topic
| Simulation
|
References
|
Minutes: |
N/A |
Slides: |
K. Amako: Introduction:
HTML,
PS,
PDF,
PPT |
|
M. Leltchouk: EM barrel status |
|
R. Mazini: HEC and FCAL status |
|
G. Poulard: Management of Geant3 geometry:
HTML,
PS,
PDF,
PPT |
|
T. Hansl-Kozanecka: Comments on the work plan for simulation:
HTML |
|
K. Amako: Discussion on future organisation of activities:
HTML,
PS,
PDF,
PPT |
|
Fri
| 09:00
| Topic
| T. LeCompte: Test beam requirements and requests
|
References
| Slides: HTML,
PS,
PDF,
PPT
|
Summary
|
- Two purposes: detector development, calibration. Different time scale
for interest in data, fast analysis required in both cases. Data
volume typically not large
- Detector development ongoing, working in "standalone mode". Try to get
them to use the framework
- Calibration will begin soon (TileCal in July), data must be accessible
for the lifetime of Atlas. Calibration and run conditions data base
required. Unlikely to need a pp event and an old calibration event in
memory at the same time
- TileCal experience with Objectivity: Wrote data in Zebra format, later
conversion into Objectivity. Want to use the same analysis program
both online and offline. Speed: expect 3 kHz for Tile, Objy
benchmarked at 75 Hz, but measurements inconclusive. LAr finds Objy
too heavy, requests Root I/O. Not obvious that Objy can't be made to
work
- Tile plans and requirements: need to move from Zebra to Atlas-wide
format. Objy does not seem to be the right solution at DAQ level.
Access to data with the same application as pp data. Need database for
calibration and run conditions. Urgency, since data will arrive in a
few months
- Other requests: Pixels: support for adapting their code to the
framework, would like to store their data at CERN. Overall: lots of
interest in comparing Geant3/Geant4 data with test beam data - chicken
and egg problem
|
Discussion
|
- Stability of the appearance of data in application programs probably
matters more than stability of their physical storage - nonetheless
people should be prepared to re-write code
- RD45/LHC++ is working on the infrastructure for a persistent
calibration / run conditions data base
- Need to determine a complete set of milestones for the detector
description and the event
- Q: Will you work on resolving the inconclusive benchmarking results
about Objy? A: Only as far as is necessary to obtain a performance
sufficient for the TileCal test beam data taking
|
Fri
| 09:20
| Topic
| D. Malon: Database technologies for the Atlas
Testbeam Software Initiative
|
References
| Slides: HTML,
PS,
PDF,
PPT
|
Summary
|
- For work program of the data base group, see talk on Wednesday
- Infrastructure: Federation creation, replicas, release tools; schema
management; database id management
- RD45 tools: Session management; naming, user-owned databases;
Conditions database infrastructure, HepODBMS clustering
- Connections to control framework and architecture
|
Discussion
|
- Milestone: Oct 2000: Limited persistency in Athena
- Q: Why can't we use the RD45 calibration / run conditions
infrastructure right away? A: There are still issues about the
versioning, support infrastructure still missing in detailed areas.
We should probably use it as a starting point
- Q: LAr would like to store data into Objy by October 2000, requires
that Objy access from the framework is available beforehand. A: See
David Quarrie's talk
|
Fri
| 09:30
| Topic
| D. Quarrie: Architecture/framework/analysis aspects
of testbeam work
|
References
| Slides: HTML,
PS,
PDF,
PPT
|
Summary
|
- Input from event generators and physics TDR, sequences and filtering
algorithms, output of histograms and N-tuples. Obviously not yet
adequate for testbeam activities. How quickly can we get there?
- Aim to integrate into TileCal testbeam work, input from Objy, mapping
of detector-centric rather than event-centric view (work starting soon,
timescale for availability not clear, ideally for July/August TileCal
testbeam run)
- LAr requests: Bookkeeping, persistent I/O (Objy, RootIO, timescale
September for store/retrieve), detector data store, reconstruction by
December 2000
- Core for reconstruction and analysis available now; major lack is
persistency (work starting)
|
Discussion
|
- Q: Does LAr want to convert from Zebra to Objy using the framework or
via a stand-alone application? A: Would prefer using the framework.
Basic tests being set up. Algorithm can read in directly from Zebra,
but should write to Objy via the transient store
- Guess for timescale: end May: EDM workshop; end July: first writing
into Objy. Manpower available from LAr for the aspects relating to
them, which needs training and guidance
- Important to understand commonalities between systems by May EDM
workshop
- Q: LAr bookkeeping being worked on for the testbeam, hope to be ready
in August or September. A: Probably not going to be discussed during
EDM workshop. Q: To be understood to which extent we can address
offline issues as well
|
Fri
| 10:30
| Topic
| A. Putzer: NCB, EU grid bid
|
References
| Slides: HTML,
PS,
PDF,
PPT
|
Summary
|
- NCB mandate: articulate needs and wishes of all ATLAS institutes,
follow activities relevant for WWC, advise decision making bodies,
help to organise RC, WGs for specific work
- Issues: Cern Computing Review, role of RC, interaction with Monarc,
Grid activities, WWC model, SW agreements and computing MOUs, WGs
- RC hierarchy: influenced by results of re-assessments of data model
and sizes, Monarc and Grid findings, common solutions, advice from
review panel, data access method, requirements from funding agencies
- Planning for RC: Number and places of T1 centers known by 2001;
Basic WWC strategy defined 2002; typical sizes of T0 and T1 defined by
2003; role of T2 centres known by 2003
- In CTP, no decision as to how much computing will be distributed.
Clear that this is very important, research needed
- NCB working groups: (1) Interaction with Monarc (chair K. Sliwa):
collect information needed to model ATLAS specific aspects, provide
input for decision on RC hierarchy, computing TDR and MoU, link to
Monarc; (2) Grid activities (chair: L. Perini): collect information
about Grid activities, ensure information flow to collaboration,
define Grid tests; (3) Regional Centre working group (chair: S. Lloyd):
collect information about planning for RCs, prepare input for
Computing TDR and MOU; (4) Computing model (chair: A. Putzer): Collect
information needed for WWC planning, provide input to MOU and TDR,
ensure that needs and problems of all countries are taken into account
- EU Grid project: Objective: Enable the analysis of data stored in
shared large-scale distributed databases. Proposal to establish high
bandwidth network, demonstrate effectiveness of Grid technology for
real use, demonstrate ability to build, connect and manage large data
intensive computer clusters
- EU grid will collaborate with Globus, GriPhyN, ... profiting from
Globus middleware. EU Grid project focusing on rapid development.
Work packages: Fabric management, networking, mass storage,
global middleware services, data management, monitoring services,
workload management. Application areas: Physics application, earth
observation, biology
- Objectives of work packages: demonstrate feasibility of Grid
technology in HEP (MC and real data)
- Milestones (to be aligned with testbed milestones and Atlas MDC plans): month
12: requirements studies, co-ordination with other WP, interface with
existing services; month 18: distributed MC production and
reconstruction; month 24: distributed analysis; month 30: test
additional grid functionality; month 36: extend to large community
- Role of Atlas in Grid projects: Atlas fully supports projects,
participates in setting up HEP applications and in designing testbed
systems. Grid will take a lot of resources, need to ensure that Atlas
needs are properly taken into account. Proper reporting and
communication is essential
|
Discussion
|
- Q: Is there no discussion about Tier 3 any longer? A: Would be at the
institute level, and hence is not necessarily an issue of the NCB
- Q: Should the role of the Tier-2 centres not be defined a little
earlier than 2003, and come as input to the grid activities? A: What is
meant in the timetable is the final decision. Of course, the decision
making process starts now
- Atlas workshop on Grid to be run in June (week of June 12?) in Indiana
- NASA project is very similar in scope and strategy to EU Grid proposal,
review later this year (3rd or 4th week of September) including Globus
activities
|
Fri
| 11:05
| Topic
| D. Malon: Grids, world-wide computing models etc
|
References
| Slides: HTML,
PS,
PDF,
PPT
|
Summary
|
- WWC model: Tier n, RC, ..., which depends on the computing activities,
which in turn depend on what is possible in distributed computing. How
should the software react?
- CTP written before "computational grids" were born - do the latter
change our way of working? Grids are not only about high-speed
networking
- Atlas sizing estimates still very classical, based on assumption that
the raw data will be reconstructed at CERN. Early grid milestones
suggest distributed reconstruction. Need to move towards a coherent
plan
- What do we need to learn from grid R+D? How do we make software
development and grid projects converge?
- Timetable: need stringent tests by 2002, hence enough grid software
must be ready by end 2001. This requires significant data base work
this year. What grid software is needed at what timetable?
- Timetables of EU grid and our MDC/TDR work not in sync, similar remarks
are in order about other grid projects
- Implications of grids on Atlas software: Important for database, some
potential (smaller?) impact on the framework. Answers not entirely in
our hands (grid projects go far beyond Atlas, partly far beyond HEP)
- What next? Organisation needs to be sorted out, link between software
development and NCB working groups; define what grid functionality
is required when. Plan required in order to shape the grid projects
|
Discussion
|
- NCB working group and software development need to collaborate, perhaps
in a common body
- Q: Depending on what we want to achieve with MDC 1, the timing is not
badly out of sync. A: The problem is that we need as much input as
possible for writing the Computing TDR
|
Fri
| 11:25
| Topic
| K. Sliwa: Monarc
|
References
| Slides, MONARC Phase 2 report:
HTML,
PS,
PDF,
DOC
|
Summary
|
- A good simulation tool exists which helps in planning the computing
model. Main results: 622 Mbit/s links needed between T0 and every T1,
possibly the same for the links between T1 and T2; load balancing
needed on data servers; care needed for load balancing at job
submission stage
- Implications on Atlas computing model from Monarc simulations:
simulations assume an Object Database capable of maintaining
associations between objects, and providing a uniform logical view of
data. An Object Database seems to be essential to build an effective
system. Regional Centres are technically useful only in case of
limited bandwidth to CERN
|
Discussion
|
- Q: What would the minimum bandwidth be to make either T1 or T2 centres
obsolete? A: A function of which data are accessed, and how frequently
- Q: Do the simulations assume that the data are shipped to the CPU?
- Distributed resources are less likely to be vulnerable to attacks,
and the integrated total of resources is likely to be larger than if
all resources were pooled at CERN
|
Fri
| 11:50
| Topic
| R. Gardner: GriPhyN
|
References
| Slides: HTML,
PS,
PDF,
PPT
|
Summary
|
- Proposal to NSF ITR program, submitted in April, collaboration of
scientists from Atlas, CMS, Ligo, SDSS, and computer scientists
(Globus, Condor, SRB)
- Research areas: data catalogs, transparent caching, automated data
handling, resource planning, execution management, security. Expected
deliverable: software toolkit
- Grid environment: Atlas applications (Athena, simulation, ...), Tools
layer (planning, estimation, execution, monitoring), Services (archive,
cache, transport, catalog, security, policy), Fabric (computer,
network, storage, ...)
- GriPhyN activities in Atlas: Linkage between Athena, database,
simulation, and the grid toolkits; test-bed grid
- Timelines: year 1: initial grid enabling services; year 2: centralised
virtual data services; years 3-4: ever larger capacities; year 5:
enhanced data grid tool kits
- Atlas: integrate grid developments with software efforts; organise
effort around objective deliverables and synchronise with milestones;
identify suitable demonstration projects
- Workshop planned at Indiana, tentative date: 16/17 June; aims: identify
work areas, assign specific software development tasks, develop a
sensible grid computing plan
|
Fri
| 12:15
| Topic
| E. May: Particle Physics Data Grid
|
References
| Slides: HTML,
PS,
PDF,
PPT
|
Summary
|
- Started in August 1999, originally part of the Next Generation Internet
program, addressing the long-term data management needs of HEP and NP
- Goals: network and middleware infrastructure; tools employed: Grand
Challenge, Globus, Storage Request Broker, NetLogger, PingER,
Surveyor, SAM, Condor, sfcp
- Workplan: deploying PPDG services, tests and demonstrations,
development of architecture and tools - all in an iterative process
- Deployment: Atlas US access to testbeam and MC data, CMS access to MC
data, CDF run 2 data access, D0 MC data, BaBar data replication at
IN2P3, Star data replication at LBL, CLAS replication and caching
from JLAB
- Testbeds: Principal testbed, local infrastructure, network measurement
- Architecture and tools: replica catalog and data movement functions,
network-aware metadata cataloguing
- Why Atlas? Grid will provide a uniform and transparent interface to
distributed resources; existing model environments
- 150 GB of TileCal testbeam data used as a test case
- See http://www.cacr.caltech.edu/ppdg/
|
Fri
| 14:00
| Topic
| Reconstruction
|
References
|
|
Fri
| 17:00
| Topic
| Graphics
|
References
| Slides: see http://atlasinfo.cern.ch/Atlas/GROUPS/GRAPHICS/AgendaMinutes/00.05.12.m.html
|
Sat
| 09:00
| Topic
| N. McCubbin: CERN review of LHC computing
|
References
| Slides: HTML,
PS,
PDF,
PPT
|
Summary
|
- Announced by Hans Hoffmann in September 1999 during Marseille LHC
workshop
- Aimed at assessing state of work, providing input for Computing MOUs,
recommending common actions. First draft of report due by Summer 2000
- Organised by steering committee and three technical panels with some
overlap. All experiments represented in steering committee and in
panels
- Software panel: most active one, first (public!) session was a full
week. Spirit was lively, probing, robust; appreciated by many
participants
- Major points in Atlas session: TDR software is a major and impressive
achievement; keeping options open (e.g. Objectivity) has supporters and
critics; planning of MDC with respect to TDR questioned; communication
with IT was a major topic throughout the week
- More detailed discussion with panel on Objy/database and planning
- Other panels somewhat less gruelling
- Discussion about MDC in 2003, ideas about common facility with maximum
affordable number of interconnects (boxes)
- Hoped that reference numbers for computing equipment will emerge from
the panel work
- Overall report to be produced by steering committee, to be submitted
to CERN DG and Research Board
|
Discussion
|
- Q: Does it make sense to hope that a large facility will be funded in
2003, and that it will deliver the insights hoped for?
- Q: Has the Resources panel addressed the manpower issue? A: Not yet in
any detail. The numbers we start from are quite similar to other
experiments
- Q: Will there be a period for public comments on the report? A: Cannot
be promised, but thrust of the report will certainly be discussed
- Q: Does the review process implicitly assume constants such as the IT
staffing level? A: Not necessarily... if we insist on reasoning about
it, it may have an effect
|
Sat
| 09:45
| Topic
| S. Fisher: Tools and Java
|
References
| Slides: HTML,
PS,
PDF,
PPT
|
Summary
|
- XML for documentation: not much progress on the tools. DocBook
advancing, but PDF output is still poor. In Atlas, a requirements DTD
and an XSL style sheet are available (Lassi). How do we want to
proceed with them?
- Code checking: the expected tool is CodeWizard from ParaSoft; we will
probably not need it, since the functionality will be implemented in
Together. CodeWizard to be made available soon by SDT at CERN -
suggestion to gain experience with the installation at CERN, and see
what Together does before buying CodeWizard licences
- Together: 3.2 is good for C++ and excellent for Java. Interworking
with Rose fine. Large memory machine required. Can we agree to use
Javadoc style comments in C++ code in order to profit from the
functionality Together offers?
- Java: used some of the Geant4 examples to try out the interface to
C++, using either Corba (ILU) or JNI. Will try to implement an Athena
algorithm in Java. Suggestion to make sure that new Athena developments
do not go against Java. Notes being prepared on how to use Java in
Atlas (coding standards, interworking with C++), and on the pros and
cons of Java
- Release tools: the best strategy is not necessarily to pick the best
tool.
Requirements list exists, discussions held recently. SRT: add-ons
were liked, but people want an easier way to cooperate during
development. Try now to achieve agreement on performing an evaluation
of tools which must take into account the existing investment into
SRT. CMT appears to be a strong candidate to be evaluated
|
Discussion
|
- Q: Why are we spending our time writing documentation systems? A: We
don't - whenever we can find something on the market, we use it
- Q: Why do we not simply use the current recommendation, that is to say
Framemaker? A: We do. Not suggested that we decide to change now
- Q: Are we convinced that the code checking functionality of Together
will be at the level of CodeWizard? A: Java support gives rise to
hope, but there is no guarantee
- Q: When will we be able to use a checking tool adapted to our C++
rules? A: Not known
- Q: Does Java compliance imply that templates should not be used in
C++? A: Not necessarily
- Q: Is the ART effort Atlas internal, or does it involve other
experiments? A: Not for the time being, after previous experience
- Q: Should we not concentrate on interchange formats rather than the
tools to produce them? For documentation, HTML would be a clear
choice
|
Sat
| 10:50
| Topic
| N. McCubbin: Platforms
|
References
| Slides: HTML,
PS,
PDF,
PPT
|
Summary
|
- IT suggested via Focus to reduce number of platforms. Atlas generally
happy with the 'Linux + 1' proposal
- Beginning of 2000: Atlas dropped support for its offline software on
AIX
- IT proposal to close HPPLUS, DXPLUS, CSF for all but LEP by end 2000,
HP and DEC operating systems to be frozen
- Proposal to drop Atlas offline support for the Digital and HP
platforms by end 2000
|
Discussion
|
- Digital has a very good Fortran compiler that helps spot mistakes
- Implications on Atlas HP machines at CERN
- Q: Need to know what lifetime supported platforms have. A: This has
been discussed in FOCUS
- Q: Have there been any reactions yet? A: Not yet, but we need to make
a statement by May 22
|
Sat
| 11:15
| Topic
| H. Meinhard: Workshop summary
|
References
| Slides: HTML,
PS,
PDF,
PPT
|
Summary
|
- (see slides, I'm not going to summarise the summary here)
|
Sat
| 12:00
| Topic
| N. McCubbin: Conclusions, closing remarks
|
References
| Slides: HTML,
PS,
PDF,
PPT
|
Summary
|
- Interesting and exhausting week...
- Release of May prototype, tutorial is important success. Review will
address the strategic direction
- Encouraging progress in what we use framework for: simulation, graphics,
reconstruction, ...
- Shortage of effort in infrastructure: core, SRT, training, documentation
and QA
- Plan: delivery of first version in sight, has been useful already
- WWC/grid: Do we understand all implications?
- Thanks to LBNL local team, Helge, all participants
|
Discussion
|
- Craig: Thanks to all participants, NERSC and LBNL management, Roberta
Boucher, Norman, Helge, ....
|