Day | Time
Mon | 14:00 | Topic | N. McCubbin: Introduction, news, general status
References | Slides: HTML, PS, PDF, PPT
Directorate paper on CERN LHC computing review: HTML, PS, PDF, DOC
Summary |
- No details of items to be covered later
- AgendaMaker - let's try and improve it
- Very crowded week
- Special this time: physics performance groups, and reports on system
software. Suggestions concerning the agenda to
Helge
- CSG has met twice, another meeting by the end of the week. Minutes,
E-mails etc. are open - check the Atlas mailing list archive
- Technical Group: being established, convened by Maya Stavrianakou;
will probably not formally meet too often, but mostly e-mail
communication. Higher-level issues to be eventually dealt with by the
CSG
- Events outside Atlas: Marseille LCB workshop showed noteworthy spirit
of collaboration, significantly better than Barcelona workshop
- Geant4 has a new spokesperson: John Apostolakis elected until end 2000.
G4 now to be tested massively against test beam results, Geant3, ...
Talk about Geant4 later during the week
- LCB suspended for approximately 6 months, pending outcome of the CERN
LHC computing review process
- CERN LHC computing review (see reference above): still aiming for
preliminary results by "spring 2000". Norman thinks that serious
implementations will not
happen before end 2000. Paper describing the review now approved by
CERN directorate, and available on the Web. There will be a steering
group and three working groups, one each on hardware, software, and
resource management. Overall chair: Siegfried Bethke (Munich);
world-wide computing: Denis Linglin (Annecy), software: Matthias
Kasemann (Fermilab); management and resources: Mario Calvetti
(Firenze). First steering group: before end 1999. Intended to be
helpful for the experiments as well
- LHCC review of Atlas computing: week March 6 2000, again by Matthias
Kasemann
- Thorough understanding of our project required ("the PLAN"), not only
for the reviews, but also for ourselves. Serious go until next
workshop
- Computing MOUs / software agreements: CERN RRB informed by Atlas that
we want to proceed; discussions in Atlas ongoing
- Possibilities for EU money (Framework 5) being investigated
Discussion |
- Q: Geant4 is a hot topic indeed, there is strong need for both formal
and informal communication, for which we should push as much as
possible. A: Liaison with the users is indeed very important, and is
perhaps not actually being given enough weight by Geant4
- There are even rumours saying that experiments that have not signed
the MOU are not allowed to talk to the Geant4 team...
- Q: Has Atlas software put in any requests for the Intas program?
A: Not known. (V. Vercesi) There are some projects in Trigger/DAQ
|
Mon | 14:50 | Topic | M. Stavrianakou: Repository and releases
References | Slides: HTML, PS, PDF, PPT
Summary |
- Packages: 45 top level, 152 total; in release: 42 top, 121 total.
Not yet in release: control, data base, doc, Tilecal. To be
implemented: Atlas simulation with G4, analysis, new reconstruction
modules
- Contributed software: e.g. Atlantis
- External software: Expat, classlib, pccts; fragments for external
packages, allowing for version control without pulling external
packages massively into the repository
- External software outside the repository: Isajet (installed and
maintained by Jim Shank), Geant4 (4.0.1p01 installed by Maya,
cross section data, visualisation); shared libraries of Cernlib coming,
to be installed and maintained by Steve O'Neale. This is intended as
a stop-gap solution; we should press for getting things done correctly
by the responsible people outside Atlas
- Supported platforms: currently HP, DEC, IBM, Sun, Linux (not all
packages for all platforms though). Proposal to drop AIX support by
early 2000, much of the C++ software not working anyway; DEC should
be dropped by spring 2000 at the latest (Atrig is no longer supported
on it); HP should be considered for phasing out in 2001, depending on
the Focus discussion
- About 36 releases in 18 months, significant progress in automating
release building and in documentation. In order to be complete, the
following is required: entire release tree for sources, subtrees for
desired platforms, external software, data files, and SRT version to
build the release
- Production release: targeted for TDR software based on cvs/SRT.
Slowdown due to delays in testing; atlsim and atlfast required on HP
and Linux. Intention is to provide production release once the missing
binaries are there, at the same time the test area would be frozen
(remove write access) and eventually removed. This may actually be a
larger issue, since non-developers are not heavily using cvs/SRT yet
- Testing of candidate production release: Calorimeter testing done and
documented by Monika Wielers and Martine Bosman, ID testing in
progress (Dario Barberis, David Rousseau); muons reported to proceed
as well. Outstanding problems: hits in TRT (HP vs Linux); iPatRec
testing needs to be re-done after fix in pixel common block
- Repository, build and release tools: installed and maintained by
Lassi Tuura, offering SRT, CVS server, build server, build logfile
analyser, cvs browsers, list of package coordinators, dependency
grapher
- Outstanding issues: generator packages, package author list,
circular dependencies, releases both in optimised and debug mode,
long-term tool development and support
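The circular-dependency issue listed above can be checked mechanically with a depth-first search over the package dependency graph (the dependency grapher mentioned earlier produces exactly such a graph). A minimal sketch; the package names and edges are invented for illustration:

```python
# Minimal cycle detection over a package dependency graph.
# Package names here are invented for illustration only.
deps = {
    "reco": ["event", "geometry"],
    "event": ["utils"],
    "geometry": ["utils", "reco"],   # closes a cycle: reco -> geometry -> reco
    "utils": [],
}

def find_cycle(graph):
    """Return one dependency cycle as a list of packages, or None."""
    WHITE, GREY, BLACK = 0, 1, 2     # unvisited / on current path / done
    colour = {pkg: WHITE for pkg in graph}
    stack = []

    def visit(pkg):
        colour[pkg] = GREY
        stack.append(pkg)
        for dep in graph.get(pkg, []):
            if colour[dep] == GREY:              # back edge: cycle found
                return stack[stack.index(dep):] + [dep]
            if colour[dep] == WHITE:
                cycle = visit(dep)
                if cycle:
                    return cycle
        stack.pop()
        colour[pkg] = BLACK
        return None

    for pkg in graph:
        if colour[pkg] == WHITE:
            cycle = visit(pkg)
            if cycle:
                return cycle
    return None

print(find_cycle(deps))  # -> ['reco', 'geometry', 'reco']
```

Such a check could run as part of release building, refusing a release that introduces a new cycle.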
Discussion |
- Q: How do the fragments for the external packages work in outside
labs? A: They can be disabled, documentation to follow
- Q: How can we seriously talk to Geant4 about testing if we were not
able to test our own software for a year? A: That's not entirely
the case, the last CMZ version was well tested
- Q: Will there be an Atlas policy on RedHat, gcc etc., or do we simply
follow the IT policy? A: That is currently being discussed. It is not
clear that our code will run on RedHat 6.1
- It is very important that we make progress with a production release
of the TDR software based on cvs/SRT. We are still awaiting test
results on the LAr EM part; the status of the muon testing is not known
- In terms of operating systems and compilers, it seems that we will need
to support more than one version of both
- Q: Is it still true that many packages are disabled on many platforms
because they won't compile? A: Yes; some effort is done to keep a
matrix of packages and systems they build on up to date. Actually,
most packages are explicitly enabled on some platforms only; we
should probably move to a strategy where we would explicitly disable
some packages on some platforms, and enable by default
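The strategy change proposed in the last point can be sketched as follows; the package and platform names are invented for illustration:

```python
# Sketch of the two build-selection strategies discussed above.
# Package and platform names are invented for illustration.
PACKAGES = {"atlsim", "atlfast", "atrig", "graphics"}

# Current strategy: a package builds only where explicitly enabled.
enabled_on = {"atlsim": {"hp", "linux"}, "atlfast": {"hp", "linux", "sun"}}

def build_set_enable(platform):
    return {p for p in PACKAGES if platform in enabled_on.get(p, set())}

# Proposed strategy: everything builds unless explicitly disabled.
disabled_on = {"atrig": {"dec"}}

def build_set_disable(platform):
    return {p for p in PACKAGES if platform not in disabled_on.get(p, set())}

print(sorted(build_set_enable("linux")))   # ['atlfast', 'atlsim']
print(sorted(build_set_disable("linux")))  # ['atlfast', 'atlsim', 'atrig', 'graphics']
```

The enable-by-default variant makes a missing entry mean "builds everywhere", so new packages are picked up on all platforms unless someone deliberately opts them out.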
|
Mon | 16:10 | Topic | N. McCubbin: Atlas computing platforms
References | Slides: HTML, PS, PDF, PPT
Summary |
- Trigger: Focus meeting on Thursday afternoon this week
- CERN/IT proposal: end of support for system, Cernlib, CLHEP etc
for AIX, Digital Unix, HP, and Irix between end 1999 and end 2001;
dates for Solaris to be defined, to be kept as a safety net
- General flavour of reactions within Atlas: Many said that there is no
problem; main objections: should not rely on one single operating
system; is an open source project really trustworthy enough; test
beam measurements require RD13 software, currently only available on
HP-UX
- Proposed slide for Focus
Discussion |
- Q: What does CERN mean by "Linux"? A: Linux on x86 or, in the future,
the IA64 architecture
- Q: What about Sun support? A: IT proposal does not specify an end date
- Development of DAQ software is depending heavily on commercial
applications that are not available on Linux, but are on Solaris.
Solaris will certainly need to be used for quite some time. Another
reason to keep Solaris are mission-critical file servers
- Q: There are many institutes which are involved in running experiments;
for them further support of the proprietary Unix systems is vital in
order to protect their investment. A: The enquiry in Atlas did not
support this point of view
- Since we are not interested in NT, the prospect of having just one
operating system is worrisome. We should be asking for long-term
Solaris support
- Q: The result of the enquiry may not reflect actual feeling of people
because of the way the question was asked. Investment protection must
be taken into account very seriously. A: This is by and large done by
the proposed dates
- We should consider that essentially all vendors are preparing for
operating systems for the IA64 architecture. How will CERN react to
this?
- Q: Do people need to upgrade to Solaris 2.8 in order to profit from
the 'safety net' support? A: That's probably a typo
- We need to understand in which respect we rely on IT support, and to
what extent we can afford to be on our own
- Probably having adequate capacity under Solaris as work group servers
at CERN would be sufficient - there is no need for outside institutes
to invest into Solaris
- How do we address the problem of defining the correct operating system,
compilers, ....? At what frequency should they be updated? A: This is
not a new problem for Linux, we had the same problem with all other
systems
- Q: Is it possible to ask CERN to define a standard Linux environment
which will be sufficient to run all CERN supported software? A: Yes,
that has been done in the past already. The CDs offered by the CERN
bookshop allowed for an easy installation of such a system
- Q: What about the planning of IT division? Are there any long-term
commitments? A: There is a cycle of 6 years for exploiting new
platforms
- Q: What about the compilers? Do we need a system that supports more
than one compiler version on a given system at a time? gcc 2.95 is
already out - should we stick with egcs 1.1.2 for the next 18 months
or so? A: gcc 2.95 was found not to work with g77. One cannot easily
install gcc 2.95 over a system built with egcs 1.1.2
- Q: How is the future of NT seen? A: At the moment, Atlas has no
plans to invest effort to get its software running under NT. Q: Offers
by individuals to help with NT support have been turned down in the
past
- Test beam installation has been frozen some time ago - HP is required
for that purpose, at least for the DAQ kernel, without any upgrades.
Support for a stable version of the HP operating system will be
required
- What about dropping platforms from the Atlas software releases?
Dropping AIX by
beginning 2000 does not seem to be a problem. Digital Unix needs the
clear statement of requirements from the TDAQ community - no strong
case is seen for its support beyond the trigger TDR. Probably,
development can already stop now for the platforms for which
we do not intend to provide releases in the near future
- The problem with Digital Unix is that shared libraries would require
dedicated work now
- Saclay would like to see further DEC support of Atlas software
- End of AIX and DUX builds will be announced
|
Mon | 17:15 | Topic | S. Fisher: Tools (Together, StP, Web, documentation)
References | Slides: HTML, PS, PDF, PPT
Summary |
- Case tools: StP used in the past, licences (at CERN) available for
everybody. Together seems to be liked better, site licence exists.
Questionnaire has shown that more than one case tool is required
- Together 3.1 should be out in 2000, many interesting improvements.
StP considering a Linux version, but still waiting for them to
synchronise NT and Unix versions
- New in Together 3.1: supporting UML 1.3 (new diagrams), support for
patterns, out-of-the-box integration with cvs, improved support for
metrics, sample scripts, documentation template designer, improved
HTML generation (code doc is Javadoc style), Swing as user interface
- StP maintenance due for renewal - what do we request? Does anybody
prefer StP over Together? Should we keep some licences just in case?
Unfortunately, no detailed usage statistics available
- Code checking tools: working group will finish report soon. Two strong
candidates (CodeWizard, QA-C++)
- Configuration management: following requirements document, CMT based
proposal received; SRT based proposal expected soon. RPM based solution
considered as well. Should pursue SCRAM. Would appreciate help in
checking documents and evaluating proposals
- Web: of the suggestions of last time, the tidying up has been done,
and the new search engine has been installed. There is now an index
page. For adding pages to the Web, if there are new computing pages,
please inform Steve. For pages at CERN, Steve should be contacted
as well. Need to go to a system where responsibilities are clearly
defined
- Documentation preparation: XML taking off in all areas, but producing
paper is tricky. Different style sheet languages (CSS, DSSSL, XSL).
Lots of tools around, but interoperability is still a problem. Apache
has recently taken XML on board. Projects: XML parsers in Java, C++,
perl; XSLT stylesheet processors, XSL formatting objects
- For now, Latex based solution still exists. Perhaps in spring we can
envisage migrating to XML
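As an illustration of the XML-parser projects mentioned above, here is a minimal sketch using the Expat parser (listed earlier as external software in the repository) through Python's xml.parsers.expat binding; the document and element names are invented:

```python
# Minimal sketch of event-driven XML parsing with Expat, via
# Python's xml.parsers.expat binding. The toy document and its
# element names are invented for illustration.
import xml.parsers.expat

doc = b"""<note><title>Atlas software workshop</title>
<item day="Mon">Repository and releases</item></note>"""

elements = []

def start_element(name, attrs):
    # Called once per opening tag, with its attributes as a dict.
    elements.append((name, attrs))

parser = xml.parsers.expat.ParserCreate()
parser.StartElementHandler = start_element
parser.Parse(doc, True)  # True: this is the final chunk of input

print(elements)
# [('note', {}), ('title', {}), ('item', {'day': 'Mon'})]
```

Event-driven (SAX-style) parsing like this never builds the whole document tree in memory, which matters once detector-description files grow large.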
Discussion |
- Q: Is it intended to migrate projects from StP into Together? A:
No, not at this point in time. (J. Hrivnac) The design of Graphics
has been migrated to Together by hand
- No immediate need for StP, policy to keep a small number of licences
is considered reasonable
- Training and documentation about Together should be provided
|
Mon | 17:40 | Topic | G. Poulard: Muon software: status and plans
References | Slides: HTML, PS, PDF, PPT
Summary |
- Mainly a summary of the Eilat workshop
- TDR software: simulation and reconstruction fully integrated in Atlas
framework, some studies still needed (CSC, more detailed MDTs, ...).
Need to keep the full chain running
- Muon data base: filling of RPC geometry from AMDB, testing of
transformation methods, test whether move to AGDD is feasible
- Draft 1.0 of muon spectrometer database task available, identifying
important issues. Task coordination and resources addressed
- Evolutionary development from AMDB to AGDD; AMDB will be maintained
until a fully tested and proven replacement is in place
- Detector description is in good shape, as is the event model,
calibration and alignment being studied now
- Simulation: intensive effort started, training done etc. Lots of
questions. Long-term plan established; will concentrate on test beam
simulation for the next 1...2 years. Detailed action plan until
mid 2000
- Reconstruction: pending issues (CSC implementation, misalignment,
event filter); lots of questions to be clarified
- Waiting for new framework, but planning to migrate code into Paso in
the meantime
- Reconstruction packages: Muonbox (main package so far). Code will be
wrapped, not translated into C++. Amber pure OO package, being ported
to Unix, maintenance being discussed - solution is in sight. If Amber
fails, a new package will have to be envisaged
- Are going for a full chain of generating Geant4 events, and
reconstructing them through Muonbox
- Testbeam software: first time it was discussed... agreed that it is
part of the general software. Communication to be improved between
testbeam and offline communities; some responsibilities to be
clarified. Good progress on a number of projects already, suggested
to take a few as pilot projects for Atlas Offline
- Documents available on the Web
- MOUs well received in the community, more detailed discussions to follow
Discussion |
- Q: When do you envisage migrating from bare-bone ASCII to XML? It
would be good if it could happen before end of next year. A:
XML is being tested now. The first goal will be to reproduce exactly
the bare-bone AMDB from the XML version, then new features will be
implemented. Essential to keep a full system running
- The Geant4 builder is fine for prototyping, but not for the final
product - the final geometry should be created by a human being
- Q: What is Paso being used for? A: Currently as a proof of some of the
new concepts
- Amber can probably be migrated into Paso quite quickly, since it has
been developed within the Arve framework, with which Paso shares many
ideas
- It may not be easy to satisfy specific test beam simulation
requirements; some coordination is probably needed
- 'Test beam' setups should also comprise the cosmic ray facility for
production tests
- In order to be able to compare test beam data with Geant4 based
simulations, need to define which test beam data will be used.
Test beam and G4 people need to be brought together
- Q: It is desirable, and probably conceivable, to have an implementation
of a simple magnetic field in Paso. A: That may actually be much more
involved - to be seen
|
Tue | 09:05 | Topic | N. McCubbin: Architecture
References | Slides (Stephen Haywood): HTML, PS, PDF
Summary |
- Presenting slides prepared by Stephen Haywood for the Executive
Board
- ATF: Mixture of OO gurus and newcomers into the business, this
mixture has driven the result
- Input gathered from within Atlas, but also from LHCb (Gaudi document
much appreciated), D0, Alice (and particularly BaBar and CDF, which
were represented in ATF)
- ATF focused on common execution framework; aim to provide groundwork
lasting fairly long. Effort done to understand Atlas requirements, and
to start designing a system from scratch, following USDP in an
iterative and incremental way
- USDP to be used in a flexible way, adapted to our needs. Found abstract
diagrams useful
- Complementary approach, also followed by ATF: Put in our knowledge and
experience from previous projects - what components do we expect to
have? Two approaches expected to converge
- Ongoing work for Event, Detector Description, Database
- Decisions of ATF: OO, C++ with an eye on Java, separation of data and
algorithms, deferred access to data, independence of database
supplier by separation of persistent and transient data, access to
event data via transient data store, transient data store used for
transmission of data between modules, single source for detector
description
- Control flow: Traditional approach chosen over a pure data driven
approach (Object Network)
- Follow-up working group (Architecture Team) strongly recommended by
ATF: coherent and dedicated team of OO literates, not a bureaucratic
committee, with a strong emphasis on communication with the detector
and physics communities. Aim is to provide prototype by May 2000
- Conclusions: setting out the architecture is a big, complicated and
specialised job
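The transient-data-store decisions above (modules communicate only through the store; stored data cannot be modified) can be sketched minimally as follows. The class, key, and module names are invented for illustration, not taken from any ATF design:

```python
# Minimal sketch of a transient data store along the lines of the ATF
# decisions: modules exchange data only via the store, and an object,
# once registered, cannot be replaced. All names here are invented.
class TransientStore:
    def __init__(self):
        self._data = {}

    def register(self, key, obj):
        if key in self._data:
            raise KeyError(f"'{key}' already registered; stored data are immutable")
        self._data[key] = obj

    def retrieve(self, key):
        return self._data[key]

store = TransientStore()

# An upstream "simulation" module publishes its output...
store.register("/Event/SimHits", [(1.2, 3.4), (5.6, 7.8)])

# ...and a downstream "reconstruction" module reads it via the store,
# never talking to the simulation module directly.
hits = store.retrieve("/Event/SimHits")
print(len(hits))  # 2
```

Separating persistent and transient representations then amounts to converters that populate such a store from the database on input, keeping algorithms independent of the database supplier.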
Discussion |
- Q: What do we hope to have in May 2000? A: There is some flexibility
to the Architecture Team. Hope to have a transient store, the
infrastructure for communication between modules, some event input and
perhaps detector description
- Q: To what extent are the ATF decisions indicative, and which ones are
binding? A: We should take a pragmatic approach here. If the
Architecture Team finds good arguments why an ATF decision doesn't
make any sense, it will be reconsidered, but the Architecture Team is
not supposed to question and reconsider all ATF decisions ab initio
- Q: Is ATF still available to reply to questions and explain their
choices? A: As a formal body, no - it is going to be wound down by
Management. Of course, the ATF members are still around. It is
strongly intended to have a significant overlap of ATF and
Architecture Team membership
- Q: The transient store will not necessarily be written out entirely -
isn't reproducibility an issue? In CDF, the decision was made to make
all transient event data persistent. A: Wasn't it persistifiable, not
persistent?
- Q: An important issue is whether or not the data in the transient store
can be modified. A: ATF has addressed this and established as a
principle that they cannot. The Architecture Team will revise this and
see whether this is practical
- Q: One of the benefits of Object Networks is that actions on demand can
easily be implemented. Has this been considered in the decision? A:
Yes, but it is not clear that action on demand is a desirable feature
at all
- Q: Where will things be put that need to be communicated between
events? A: Detector description or conditions, or statistics. This
would make it possible to cover the requirements of e.g. calibrations
- Q: Gaudi had an external review of the design, before the coding
started; do we have similar plans? In any case, the possibilities for
the public to influence the ATF work were poor. Strong suggestion to
have an open discussion meeting at the startup of the Architecture
Team. A: We need to find the most effective way to communicate and
discuss, perhaps e-mail exchange is more effective than an open
meeting. Q: Well, it wasn't really obvious that ATF has taken e-mail
input into account in a reasonable way. A: That's not quite fair...
- Q: Obviously, Gaudi took quite some time for the first implementation;
can we expect to take the same time, or perhaps even
longer? Can we perhaps profit from other people's experience? A: We
don't really have a clear idea of the time scales yet; collaboration
with other projects will be carefully considered. Q: Collaboration is
even essential, unless we find use-cases (for the framework!) which
are different from the ones of other projects. A: Probably there aren't
any different use-cases. Collaboration is easily accepted as a matter
of principle, but there are some practicalities to be sorted out. Q:
We must be flexible towards new use-cases or requirements to the
framework
- Q: There is no overall diagram of the architecture in the ATF report -
does that imply that the ATF did not have an overall view? A: Probably
you want something like figure 3.1 in the Gaudi document... the list
of expected components, and the description of their interactions,
already represents a global view. A diagram would have been too
cluttered
- Q: To which extent has ATF considered scalability issues? For example,
it is not clear that the transient event concept, that may work for
LHCb, will work for Atlas as well. A: This issue has not particularly
been addressed, but scaling issues were addressed in the context of
the evaluation of Object Networks
|
Tue | 10:50 | Topic | N. McCubbin: CHEP 2000
Summary |
- Chep during week of February 7th in Padua
- We know of five abstracts submitted by Atlas
- Selection of abstracts to be presented soon
Discussion |
- (L. Perini) Lots of abstracts on software / framework / architecture
session, session will need to be extended
|
Tue | 11:00 | Topic | H. Meinhard: Problem reporting and tracking
References | Slides: HTML, PS, PDF, PPT
Summary |
- It is evident that more than direct communication (in person, by phone
or e-mail, or via mailing lists) is needed - the same bug in Atlas
software has been "found" three times
- System set up in the past based on Gnats which needs to be phased out
(not Y2K compliant, not developed further); usage was disappointing
- CERN-IT, which have provided the service for Atlas in the past, have
migrated to Remedy
- Technical issues involved: Tree of categories (possible to fill up
to three levels in contrast to Gnats with its flat list), DAQ will be
joining; report submission via Web interface or routed e-mail;
notification to "notification group" per category; automatic,
escalating reminders of non-closed items
- Cultural issues: Why has usage of Gnats been at a very low level? Other
HEP (and outside) projects have been more successful - strong push
needed to overcome the threshold
- Conclusions: Migration off Gnats inevitable; Remedy fine on technical
grounds; strong push needed
Discussion |
- Q: It is obvious that a problem tracking system is needed; how far
away are we from using it? A: Once we have established the list of
categories, and the notification list for each one, work can go ahead,
and is expected to take a few days
- It is important that a reply is generated automatically, with an
indication of who is supposed to follow up the problem. Also, a human
being must ensure that open reports are followed up
- Q: Is it possible to link Remedy with a solution data base? A: We
understand that this is currently being done for the IT applications
- Q: Are there any licensing issues for maintainers outside CERN? A:
Certainly not for the Web interface; to be checked for the native
clients
- BaBar is using a problem tracking system (Remedy), Hypernews and
mailing lists all at the same time, with good success. A: We are
actually waiting for an implementation of threaded news groups, which
is said to be straightforward
|
Tue | 11:20 | Topic | D. Candlin: PASO
References | Slides: HTML, PS, PDF, PPT
Summary |
- Context and relationship of Paso, Event, Slug/G3, Graphics, user
application
- Paso profited a lot from Arve
- Paso is an interim solution
- Main limitation is that application is assumed to be a single module -
no communication facilities between application modules provided
- Successful tutorial on Monday, found that default CERN Unix
environment is not sufficient to get easily started
- Status: Activity on SCT space points, Atlantis, Wired
- Next to come: more detectors, parameters, Aravis upgrade
|
Tue | 11:30 | Topic | J. Collot: LAr software: Status and plans
References | Slides: HTML, PS, PDF
Summary |
- Need to face the future in a coherent way
- Training for LAr community: John Deacon's courses, G4 course, some
activity at home labs (BNL, Grenoble, ...), Web training sites,
reading
- Time scale wishes: first fully functional new software in two years,
G4 implementation and tests in one year, test beam sim/rec/DB in one
year
- Architecture: first directions from ATF, no major objections from LAr
community. Crucial use-cases to be checked, Architecture Team to be
launched asap - important for system software
- Inception phase: first step: get started, collect use cases - finished;
second step: more use cases, a bit of analysis and design - finished;
third step: identify crucial use cases, more analysis and design.
End of inception phase by ~ January 2000: crucial use cases, draft
models, entity lists, tentative architecture, risk identification,
priorities, planning of next phases
- Ongoing activities: G4 implementations, Paso, database, physics
performance
- Reconstruction: ATRECON flow described in
/afs/cern.ch/user/s/schwind/public/code.ps;
algorithms to be re-used for first OO version; in parallel, OO analysis
ongoing and being implemented in Paso
- Geant4 activities in LAr community boosted by courses, about 20 people
committed, OO design. Hadronic calorimetry: geometry not too
complicated, but physics of hadronic showers not well understood
(neither was it in Geant3). EM calorimetry: really complicated
geometry, but physics under control
- Geant4 feedback: full EM geometry implementation takes 100 MB,
parametrised geometry much too CPU intensive - waiting for boolean
volumes
- Good agreement of response to 20 GeV electrons, slightly less energy
deposit in G4, resolution agrees well. Energy deposit in absorbers
quite different between G3 and G4
- Data base: concentrating on XML detector description, package under
evaluation, probably difficult to use for final G4 code design and
implementation
- Performance studies beyond physics TDR being pursued
- Computing MOU: okay for hardware, regional centres, infrastructure
software; but physics software - do we really want an MOU for it? Or
any other mechanism that could be revised annually?
Discussion |
- Real problem with the accordion geometry is much larger than is shown
in the test
- Calibration is a complex issue, infrastructure must be in place early
enough
|
Wed | 09:00 | Topic | D. Malon: Data base and detector description issues
References | Slides: HTML, PS, PDF, PPT
Summary |
- Reporting mostly status as of the database workshop in October
- Watching ATF progress, trying to adapt to ATF decisions
- Issues addressed: How to achieve independence of data base supplier,
separation of algorithms and data, transient/persistent mapping; what
features of the chosen data base manager can one use and still remain
independent of supplier?
- Looked at D0 approach (D0OM) and LHCb (Gaudi); D0OM providing automatic
translation of transient to persistent classes
- Detector description: Presentations by Stan Bentvelsen, Christian
Arnault, Julius Hrivnac, Marc Virchaux
- AGDD (XML) status: currently closely related to G4 geometry definition.
Generic model: used to derive application view; currently close mirror
of XML. Traversal of elements based on visitor pattern
- Many people now using XML, less so with the generic model and the
Geant4 detector description. Consistent scheme for identifiers is
required
- Tutorial on event data access through Paso; people identified to
complete event access to TDR data
- Particle Physics Data Grid: Next generation Internet project involving
US labs and universities, targeted at large range of HENP experiments,
but specific attention on Atlas requirements and use-cases
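The visitor-based traversal mentioned above for the generic detector-description model can be sketched as follows; the class names and the toy geometry are invented for illustration:

```python
# Minimal sketch of visitor-based traversal of a generic detector
# description model, as mentioned above for AGDD. Class names and
# the toy geometry tree are invented for illustration.
class Volume:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def accept(self, visitor):
        # Let the visitor act on this node, then recurse into children.
        visitor.visit_volume(self)
        for child in self.children:
            child.accept(visitor)

class NameCollector:
    """A visitor that records every volume name it sees."""
    def __init__(self):
        self.names = []

    def visit_volume(self, volume):
        self.names.append(volume.name)

barrel = Volume("barrel", [Volume("layer1"), Volume("layer2")])
detector = Volume("detector", [barrel, Volume("endcap")])

collector = NameCollector()
detector.accept(collector)
print(collector.names)
# ['detector', 'barrel', 'layer1', 'layer2', 'endcap']
```

The attraction of the pattern here is that new operations over the geometry (building Geant4 volumes, printing, consistency checks) are added as new visitor classes, without touching the model classes themselves.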
Discussion |
- Does the PPDG bring the data to the code, or the code to the data? To
which extent is it relevant for LHC computing?
- Q: There are two models being worked on, the generic one and the XML
one. Do we need both? A: We need to see to which extent the generic
model can be equal to the XML one, but it is clear that the generic
model will need to provide for some non-static functionality, and take
into account data that might not be in XML files, such as misalignment
- Q: Given that the design work reasonably started from the simulation
needs, when will the reconstruction be addressed? A: The design of the
AGDD DTD has taken the requirements of reconstruction into account from
the beginning
|
Wed | 09:30 | Topic | N. McCubbin: Future activities of RD45
References | Slides on Atlas reaction to CERN-Risc proposal: HTML, PS, PDF, PPT
Slides on future activities of RD45: HTML, PS, PDF, PPT
Atlas attitude towards future activities of RD45 (letter of N. McCubbin to LCB chairman): Plain text
Summary |
- Summary of the Atlas attitude towards the proposal to reduce Risc support
- Atlas was asked for its view on the future RD45 activities
- Atlas much interested in Object Oriented database management systems,
baseline choice for the time being is Objectivity/DB, but asked for
careful assessment of the needs of LHC experiments for all various
kinds of data involving users and data base experts
- No strong Atlas point of view on the organisation and structure of
the work to be done
- This Atlas position has been communicated to the LCB just before it
got suspended
- RD45 referee report (Irvin Gaines, Norman McCubbin): Assuming (as
suggested by BaBar and other experiments) that Objy can work in HEP
environment, still serious concerns about size of OODBMS market and
single supplier
- Plausible scenario is that Objy would remain the only viable commercial
OODBMS solution for the foreseeable future; this would require being
ready to implement a fall-back solution on a timescale to be defined
- Possible way forward: Support for existing usage: support group;
assessment of needs, dialogue with Objy, recommendations: RD45 (?), but
need commitment and input from LHC experiments, and from other areas of
relevant expertise. If a home-grown product to be developed, a new
project must be launched. Hans Hoffmann to decide
Discussion |
- Q: What does it mean to "be ready"? A: To be defined...
- Q: There are rumours saying that Oracle has announced an OODBMS. A:
It is known that for a couple of RDBMS, object-oriented extensions are
on offer; also third-party products exist offering an OO interface to
a RDBMS back-end. Functionality and performance still to be evaluated
- Q: What about the risk assessment of RD45? A: This is clearly a task
to be continued. Whether this will happen within RD45 or in another
project depends on the decision about the future of RD45
- It is clear that there might be a variety of solutions around, as is
already the case now. Be aware that X (months for developing a fallback
solution) may be a large number
|
Wed | 10:00 | Topic | H. Drevermann: Atlantis
References | Slides: HTML, PS, PDF
Summary |
- Need for more than one event display program, Atlantis is just one of
them. Main author is Hans, with significant help from a number of
people
- Aim of event display: E.g. check pattern recognition for lost, fake,
wrong tracks; check calorimeters, calo-track association, ID-muon
association, secondary vertices
- In high-occupancy detectors, check of the pattern recognition is not
straightforward
- Without pile-up, things look a lot cleaner already; by eye, most tracks
recognised in TRT, some in SCT, none in pixels
- Problem is that per phi segment, there are as many hits in pixels as
in TRT; x-y plot not adequate. Phi versus rho is more appropriate, but
is more abstract
- In case of low occupancy, there is no problem, but graphics is supposed
to help with the difficult events
- If rho scale is compressed, tracks are seen at a larger angle, which
helps guide the eye. If rho is replaced by dip angle, there is a
clear separation of tracks, and signals of one track are close, but
pt and charge are difficult to judge
- Solution: V-plot (phi versus dip angle, with indication of the distance
of the hit to the outer SCT border). Opening of V clearly indicates the
charge. Real three-dimensional information in the picture, not a
projection. Two opposite Vs could indicate a neutral decay
- Nice correspondence of V-plots for hits and tracks; V-plots very well
suited to select a region of interest (shows only the hits in that
region in other projections)
- V-plot very sensitive to z shifts of the vertex - Vs get asymmetric.
Problem is that the vertex needs to be approximately known. Without
track reconstruction, this is difficult; but pairs of pixel hits, or
triples with at least one pixel hit, can be histogrammed and hence the
vertex z position obtained
- Even with this information, a V-plot of an event with pile-up is
fairly hopeless: lots of noise, only few hits. A hit filter was
developed which improves the situation considerably. The filter creates
a histogram of phi
versus theta, filling the number of layers and cutting on cells with
at least four layers. Even allows for colour assignment of tracks.
Tracks below 1 GeV cut by current filter parameters. Comparison with
MC truth very good. Even if only filtered hits are drawn in x-y
projection, one cannot recognise the tracks by eye
- Timing still reasonable
- Different topic: display of a helix track with its hits.
Transformations exist which display the helix as straight lines, with
the ingredient hits
- Scale problem with ID-muon matching; radial fish-eye projection helps,
as does the clock transformation (blow-up of phi); latter one needs a
direction of preference
- Space points, pattern recognition, global tracks wanted
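
The layer-counting hit filter described above can be sketched as
follows. This is a minimal Python illustration only: the binning
granularity, the hit format (x, y, z, layer), and the function name are
assumptions, since the actual Atlantis filter parameters are not given
in these minutes.

```python
import math
from collections import defaultdict

def filter_hits(hits, n_phi=256, n_theta=128, min_layers=4):
    """Keep only hits whose (phi, theta) cell is traversed by at least
    `min_layers` distinct detector layers - a crude track filter.
    Each hit is (x, y, z, layer); binning choices are assumptions."""
    cells = defaultdict(set)                     # cell -> layers seen
    cell_of = []                                 # each hit's cell
    for x, y, z, layer in hits:
        phi = math.atan2(y, x)                   # azimuthal angle
        theta = math.atan2(math.hypot(x, y), z)  # polar angle from beam axis
        cell = (int((phi + math.pi) / (2 * math.pi) * n_phi),
                int(theta / math.pi * n_theta))
        cells[cell].add(layer)
        cell_of.append(cell)
    return [h for h, c in zip(hits, cell_of) if len(cells[c]) >= min_layers]
```

A straight track from the origin puts all its hits into one cell with
many layers and survives; an isolated noise hit populates a cell with a
single layer and is removed.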
|
Discussion
|
- Q: Is it possible to develop an automatic pattern recognition on the
basis of the V-plots? A: Yes, Astra is essentially doing it, but it
did not have the z determination via histogramming yet
- Q: What about tracks from secondary vertices? A: If the decay happens
within about 1 mm from the primary vertex, it cannot be seen. If it is
within seven layers, it must fulfil the four-layer criterion to survive
the filter. The asymmetry of the V can be used to determine the z
position of the secondary vertex
- Q: Perhaps all tracks of an event are only interesting for B events,
where the muon can give the vertex z position. For other events,
displaying regions of interest which indicate direction and momentum
is probably enough. A: That doesn't really help - displaying a region
of interest may not be any easier than displaying a full event
- It is even dangerous to start the design of a display program by
assuming that only part of the information will be displayed
- An additional complication is introduced by the inhomogeneity of the
magnetic field, although for any given track, it can probably be assumed
to be constant
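
The vertex-finding trick mentioned in the summary - histogramming the z
intercepts of lines through pairs of pixel hits - can be sketched like
this. It is a hypothetical illustration: the (r, z) hit format, the bin
width, and the function name are assumptions.

```python
from collections import Counter
from itertools import combinations

def vertex_z(pixel_hits, bin_width=1.0):
    """Estimate the primary-vertex z by extrapolating the line through
    each pair of pixel hits (r, z) to r = 0 and histogramming the
    intercepts; the most populated bin wins.  Bin width is an assumption."""
    counts = Counter()
    for (r1, z1), (r2, z2) in combinations(pixel_hits, 2):
        if r1 == r2:
            continue                            # line parallel to beam axis
        z0 = z1 - r1 * (z2 - z1) / (r2 - r1)    # z intercept at r = 0
        counts[round(z0 / bin_width)] += 1
    bin_index, _ = counts.most_common(1)[0]
    return bin_index * bin_width
```

Pairs taken from the same track all vote for the true vertex z, while
cross-track pairs scatter over many bins, so the peak stands out even
before any track reconstruction.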
|
Wed
| 11:00
| Topic
| J. Hrivnac: Access to data
|
References
| Slides: HTML,
PS,
PDF
|
Summary
|
- Data flow from Zebra tapes into Atlantis
- Paso calling sequence shown, this will create an XML file
- Atlantis allows reading from an XML file, after preprocessing, from
within Atlantis
- Will all be in next release
- Beginner's manual of Atlantis on the Web
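
A sketch of how a display program might read such a per-event XML file.
The tag and attribute names below are invented for illustration - the
actual Atlantis/Paso XML schema is not reproduced in these minutes.

```python
import xml.etree.ElementTree as ET

# Hypothetical event layout: one event per file, as noted in the
# discussion; element and attribute names are assumptions.
EVENT_XML = """
<event run="1" number="42">
  <hits detector="TRT">
    <hit x="12.5" y="-3.1" z="40.0"/>
    <hit x="14.0" y="-3.5" z="45.2"/>
  </hits>
</event>
"""

def read_event(xml_text):
    """Parse one event into a plain dictionary for display code."""
    root = ET.fromstring(xml_text)
    return {
        "run": int(root.get("run")),
        "number": int(root.get("number")),
        "hits": [tuple(float(h.get(a)) for a in ("x", "y", "z"))
                 for h in root.iter("hit")],
    }
```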
|
Discussion
|
- Q: Is it possible to have more than one event in an XML file? A: No,
the files would become too large
|
Wed
| 11:25
| Topic
| M. Stavrianakou: Analysis tools
|
References
| Slides: HTML,
PS,
PDF,
PPT
|
Summary
|
- Web pages simplified; one link per project (JAS, LHC++, OpenScientist,
Root); event display (Graphics, Aravis, Atlantis, Persint, Wired);
pilot projects and evaluation exercises; developers' corner;
requirements and use cases; analysis and design; news, meetings,
presentations; interesting pages from other projects
- Requirements and use cases: initial requirements provided, discussed in
May workshop, further developed by Fabiola and Maya, physics community
invited to provide input via a questionnaire. Aim: get feedback on tools
and use cases
- 8 replies received; all made comments on the tools used; generic use
cases, physics use cases, general suggestions. Requirements being
extracted by Fabiola (see Web)
- Requirements: At least the same functionality as Paw, but better
minimisation and fitting, better interactivity for graphics; ability
to use all reconstruction algorithms, and HEP libraries, from within
the analysis tools; access to N-tuples, hierarchical structure of
stored information; scripting interface including command history as
well as GUI; event display and event browser. Questions: Does the tool
need to have hooks to interactive simulation and reconstruction? access
to the data base? The tool should be simple, documented, user-friendly,
portable, and free, and come with a debugging facility
- Plans: Compile and complete the requirements, review, submit to tool
authors together with generic use cases; evaluation exercise. No
long-term recommendations expected at this point in time
- Outlook: continue ongoing pilot projects and evaluation exercises (eg.
CBNT with Objectivity/LHC++), participate in AIDA, evaluate IGUANA
|
Discussion
|
- Q: How many different GUIs do we have? It is a problem to get used to
many different ones. A: At least one per tool: Iris Explorer, Jas,
Root, Iguana, Paw, ...
- Q: What do we actually mean by evaluating analysis tools? We need to
address the integration with the Atlas software strategy and overall
architecture, to consider potential extensions etc. A: We indeed need
to establish collaboration with the upcoming Architecture Team. The
evaluation is not intended to be a shoot-out, and is expected to be
going on for a while
- Some requirements are already fairly clear, although certainly
incomplete; they are already imposing constraints on the tool(s) to
be chosen
- Analysis tools must not talk to persistent objects on the data base
directly, otherwise they would violate the persistent-transient
separation principle
- Iguana may be interesting in terms of its functionality, but its design
is strongly based on the assumption of Objectivity/DB as the data base
management system
- Could be useful to have a hands-on workshop
|
Wed
| 11:55
| Topic
| J. Apostolakis: Geant4
|
References
| Slides: HTML,
PS,
PDF,
PPT
|
Summary
|
- G4 kernel: powerful kernel, extensive and transparent physics models,
largely surpasses Geant3
- Current release 4.0.1 made in July 1999, offering support for STL and
RogueWave Tools.h++. A few additions (low energy em processes), and one
bug-fix patch
- Geant4 1.0 to be released next week, including new hadronic physics
models, low energy em processes, "energy loss plus" ionisation
processes, all available problem fixes. Requires STL, supported on
HP, Sun, DEC, Linux (RedHat 5.1), WNT. Later: RedHat 6, Sun CC 5, SGI.
Dropped AIX
- Will move towards ISO C++, allowing for compilation with namespaces.
Development tag will be put in place in two weeks' time
- Regular G4 tags: proposal to publish one reference cvs tag per month to
G4 members, including tested problem fixes, improvements, and tested
new developments. Experiments to distribute them internally
- G4 1.0: Neutron isotope production model, multi-fragmentation,
additional string model, new cross section classes, refinements and
improvements to em physics models
- Overview over hadronic processes existing in G4
- User support model: support guaranteed for all G4 members, best-effort
basis for the rest of the world; experiments and institutes to support
each other, each one in the area of their expertise; simple problems
should be resolved on as local a level as possible
- Problem reports on crashes, loops, discovered fixes... via Web based
problem reporting system; differences from expected results to be
discussed with the G4 experts on the matter. Enhancement requests to
go via Technical Steering Board representative (Andrea Dell'Acqua for
Atlas)
- Production release in use by at least five experiments, first
confirmation of G4 strength in em processes, hadronic processes,
geometry. Geometry can get memory-expensive for complex volumes
- There are clearly memory/performance problems for some geometries.
A new parameter ('smartlessness') has been introduced which
should help to some extent (factor 2?). Work is in progress on
the use of Boolean solids, as tracking can be slow in their present
implementation. So, there are limitations/problems with some aspects
of the Geometry which need work, and these are being or will be
addressed
|
Discussion
|
- Q: What is the G4 persistency mechanism? A: The G4 persistency
mechanism (using Objy) is an option which is there to help; of course
users do not have to use it
- Q: Which Objy? A: 5.1 and 5.2
- Q: Is there a Help Desk (or similar) for physics problems? A: G4
problems are dealt with in various ways: through the experiment's TSB
contact(s), or by direct contact with the G4 experts. It is the
intention that problems are 'escalated' within the G4 collaboration as
necessary
- Q: It's good to hear of the progress in Boolean solids. Are such
solids visualisable? A: Yes, to some extent
- Q: What about alternative ways ("assembly"?) to describe complex
geometries? A: Possible, but there's always the tracking speed/memory
trade-off
- Q: Is there an RPM for Linux (Redhat 6.0)? A: Not yet
- Q: Sometimes there are problems because the G4 expert(s) are not
using the same version as users. A: This can happen. Will try to deal
with this
- Q: There are some dE/dx problems associated with materials which are
mixtures and choice of dE/dx method. A: Yes... in some cases a
'work-around' solution is required at the moment
- Q: When will G4 move to RedHat 6.1? A: Don't know yet
- Q: Is the "dE/dx plus" method tested? A: Yes, and optimised!
- Q: Can one select the "parametrisation option" for some volumes but
not others? A: Yes, this feature is already supported
- (N. McCubbin) Thanks to John for coming to talk. ATLAS is looking
forward to a continuing and developing collaboration with G4
|
Fri
| 09:00
| Topic
| D. Barberis: Quality control group
|
References
| Slides: HTML,
PS,
PDF
|
Summary
|
- Report given to EB on 19 November 1999
- Mandate: evolve requirements concerning quality control, minimising
overhead
- Composition: variety of backgrounds, experience, interests, ...
- Software onion model (controversial point): kernel software in the
center; around that, domain- or detector-specific packages; on the
outside, individual analysis software. The closer a software component
is to the center, the higher the requirements. Differences expected to
be lifted with time
- Long-term target: good design, hence inspections and reviews; good
code, hence testing procedures; good documentation. Some requirements
on coding rules and design documentation relaxed in transition phase
for non-core software. User guides mandatory
- Coding conventions: Take Spider as starting point, classify by
importance, looking for good and bad examples. Define applicability
of each rule as function of location of code in onion model
- Software ownership: each package belongs to a working group; WG reviews
and approves design and implementation, checks the quality; QC group
advises and coordinates working groups
- Software Process: long term, the full documentation will be required
(design document, user guide, reference manual). Short term: user
guide, describing at least the interfaces. Validation by combination of
inspections, walk-throughs, reviews, tests organised by working groups
- Preparing a document on coding standards with examples
- Evolutionary process
|
Discussion
|
- Should insist on design documentation; that is what people would like
to discuss
- Q: A lot of software is not encapsulated in domains; how to ensure
consistent interfaces? A: This is the business of the Architecture
Team
- Q: There is a problem leaving the quality control just with the working
groups. A: The quality control group is there to give advice, not as
a police force. The strategy is to raise the awareness and
responsibility in the working groups
- Experience in the past has shown that external (to the working group)
reviewers are very helpful
- The understanding of what 'quality' means differs widely in Atlas. A
clear definition is required. Does it include that packages are
understandable by people other than the author? The coding rules by
themselves would not guarantee this understandability
- Q: Has the quality control group considered testing? A: There are
two aspects: performance (CPU time) and result. This will typically
again need to be addressed by the respective working groups. It should
be seen in the context with reviews
- Q: How do we ensure that documentation remains in sync with code? A:
This needs to be addressed. Putting documentation source into cvs with
the rest of the package will help a little; also there are cvs
ChangeLog files and descriptions of cvs tags which can help
- Q: The idea of ownership of a package is liked, but it requires that
the respective working group assumes full responsibility. The package
must be understandable at least to the working group. A: The boundaries
between domains must be understood clearly mainly because of the
responsibilities
- There are two aspects in keeping the doc and the code in sync, a
technical one and a psychological one. Work is ongoing to make it
as easy as possible technically
- Performance in terms of speed is very important. The current simulation
is unacceptably slow. Speed is particularly crucial for the Event
Filter
- Diagrams are a very important part of the documentation, and help the
understanding considerably, in particular class diagrams and Object
Interaction Graphs
|
Fri
| 09:35
| Topic
| J. Hrivnac: Graphics WG
|
References
| Slides (Graphics WG report):
PS,
PDF
Slides (Java in HEP):
HTML,
PS,
PDF
Minutes of Graphics meeting: HTML
|
Summary
|
- Graphics was covered quite a few times this week
- New design: Action plan agreed, walkthrough
- ATF doesn't cover graphics, some conflicts with detailed designs
presented in ATF report. Architecture of visualisation must be
considered by Architecture Team as part of the common framework
architecture
- Scenes: Atlantis: ID data available; Aravis: to be migrated to Qt;
HBookTuple: problems with Slug will be addressed
- Plottables: Space points of ID available, will work on muons next
- XMLscene: used for event (Atlantis, Wired), detector description,
histograms
- Documentation: mostly available on the Web
- Wired: now has space points, tracks in perigee and vertex formats,
collections of tracks to improve speed
- Wired internally using InfoBus, being enabled as a Jas plugin
- Common Java projects in HEP: All projects agreed on common
organisation, namespaces, ... cvs server being set up
- Colt: library for high performance scientific and technical computing
in Java
- GraXML: visualisation in 3D, implemented in Java, displaying the AGDD
data. Found Java very easy to use, adding command line interface was
trivial (BeanShell)
|
Discussion
|
- Concerning the Java namespaces, is there need for a concerted action?
Should we try and ask David Williams to act on it?
- Q: What are the graphics capabilities of GraXML? A: It uses either
VRML or Java3D. The example shown was based on VRML
- Q: Are the proposed tools and libraries available free of charge? A:
Yes, and for the most part, they are well supported
- Q: Are people interested in a revamped Slug version to be released on
HP first, or should all expected releases be available at the same
time? A: Release HP version as soon as it is available
- Q: Is Qt just another alternative implementation to Motif or Lesstif?
A: No, not at all. Qt provides different functionality, and is fully
object oriented. The API is very different
|
Fri
| 09:55
| Topic
| D. Rousseau: Reconstruction WG
|
References
| Slides: HTML,
PS,
PDF
Reconstruction meeting: Agenda and transparencies,
minutes
|
Summary
|
- Many new people, deficiencies in documentation identified
- Atrecon: needs to be maintained for some time. Big monolithic
library. Testing of cvs production version going ahead. Split up
into different packages? Implications must be understood
- ID: 3D spacepoints implemented in Paso, to be used in Graphics,
iPatRec, xKalman. Progress in putting iPatRec into Paso, involving
re-design of iPatRec front-end, first release to be expected in
January 2000
- After integration in Paso, iPatRec will be able to run without
external seeds; will try with inhomogeneous magnetic field
- xKalman++ also using inhomogeneous magnetic field, new strategies
implemented; standard strategy 3 times faster than with Fortran
version. Integration into repository being prepared
- LAr: current Fortran algorithm is documented, new design going on
- Tile: test beam in Objy, code review in progress. Tile digits being
added to event
- Muons: Muonbox being adapted to CSCs replacing MDTs in high rate
region, problem with reconstruction when em shower accompanies muon.
Muonbox will be wrapped. Amber (pure OO design) to be ported to Unix
(currently NT). Cobra delayed by bug in Geant
- Combined reconstruction N-tuples in Objectivity: being worked on, Web
page exists. Conversion being done manually. Starting point for
analysis tools study, getting physics performance people into C++, ...
- Reconstruction entity list: being established, iterative process.
Will serve as input to design
- Near future: test of Paso: all single detector reconstruction
being adapted. Combined reconstruction groups want to get on board.
Jets/Etmiss hit by N-tuple limit of 50000 words
- Longer term: re-design of whole software, but depends on Architecture
Team. Close interaction necessary
|
Discussion
|
- Q: Can we envisage to modularise the ID reconstruction algorithms? We
may want to exchange the road finding in both packages, for example.
A: That is clearly the design goal
- Q: Does the TDR reconstruction code comprise C++ elements? A: Yes,
parts of iPatRec for example
- Q: Should we not be worried about changes in the C++ compilers as
they move towards ISO C++ standard compliance? A: This is not expected
to be a large problem, as the C++ code worked fine on a significant
number of platforms
- Q: What is an entity? A: The word has been chosen in order to
denominate a high-level object, without the implication of objects in
the C++ implementation. Entities have attributes and operations,
though
|
Fri
| 10:50
| Topic
| A. Dell'Acqua: Detector simulation and Geant4
|
References
| Slides: HTML,
PS,
PDF,
PPT
|
Summary
|
- Meeting well attended, should be longer next time
- Critical mass of people reached, working on all parts of the Atlas
detector, will probably reveal many problems in Geant4
- Activities concentrating on geometries, need to start evaluating
a broad spectrum of physics asap
- Training courses almost complete, about 100 people went through
- Chaos frozen, waiting for an architecture
- Highlights: complete geometry of Accordion, detector description in
XML with G4 geometry builder, complete chain for muons AMDB - Geant4 -
Muonbox - Persint realised
- Geometry: placements work well, but heavy for large number of volumes;
parameterised volumes disastrous for tracking performance. Boolean
solids had many bugs, no graphics, slow. Not tried BREPS yet. Answer
is probably a combination of various techniques; performance is very
important
- Automatic translation from XML to G4 geometry nice for rapid
implementation of different geometries, but makes heavy use of Boolean
solids. Final geometry to be built by hand
- Physics: some problems demonstrated in em processes - didn't even look
at hadronics. Slow down, understand issues with the Geant4 developers
- Working group of Atlas volunteers proposed - regular meetings with a
clear program of work, evaluating the physics in Geant4, and
collaborating with Geant4 developers
- Some basic utilities present in Geant3 are still missing from Geant4
(user interface, list of materials, ...)
- Patch policy of Geant4 is a problem - forces us to maintain our own
version. G4 repository must be readable. Revised G4 policy does not
seem to improve things
- First priority for evaluation of the physics in Geant4; testbeams are
the next challenge. Collaboration with G4
|
Discussion
|
- Q: Materials should be included and connected to the detector
description. A: Yes, more work is needed in this area
- Q: How well are users integrated into the SRT/cvs environment? A: Not
yet... effort in this direction is planned for the near future
- There is a Geant4 geometry editor, which is a Java program for building
geometries, creating C++ source code, allowing for persistency, ...
|
Fri
| 11:25
| Topic
| I. Hinchliffe: Software aspects of Monte Carlo
generators
|
References
| Slides: HTML,
PS,
PDF
|
Summary
|
- Generator interface: C++ program to be used for exchange of information
between generators and simulation, can merge inputs from different
generators
- Isagen enables ISAJET to be used as input for Dice
- B decay packages: Pythia used so far, but inadequate. QQ used by CDF.
Area evolving with collaboration of many experiments
- Minimum bias: critical to Atlas, large differences in current models.
Maintain repository of events and settings
- Integration of parton MC into showering MC - done in the past, but must
be done more systematically
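
Merging inputs from different generators, as the interface above
supports, mainly means concatenating particle records while re-indexing
the pointers to mother particles. A hedged sketch with an invented
HEPEVT-like record layout (1-based mother index, 0 = no mother); the
actual Atlas C++ interface is not described in these minutes.

```python
def merge_events(*events):
    """Merge particle records from several generators into one event,
    offsetting the 1-based mother indices so they stay valid.
    Each particle is (pdg_id, mother_index, four_momentum); this record
    layout is an illustration, not the Atlas interface."""
    merged, offset = [], 0
    for particles in events:
        for pdg, mother, p4 in particles:
            merged.append((pdg, mother + offset if mother else 0, p4))
        offset += len(particles)   # shift indices for the next generator
    return merged
```

This is the pattern needed, for example, to overlay minimum-bias events
from one generator on a hard-scatter event from another.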
|
Fri
| 11:35
| Topic
| H. Meinhard: Summary
|
References
| Slides: HTML,
PS,
PDF,
PPT
|
Summary
|
- (see slides, I'm not going to summarise the summary)
|
Discussion
|
- (Isajet comment)
- In the summary of the Geant4 talk, it should have been mentioned that
there is a large variety of hadronic models, with a significant program
to verify these models against real data and other simulations
|
Fri
| 12:10
| Topic
| N. McCubbin: Conclusions, closing remarks
|
References
| Slides: HTML,
PS,
PDF,
PPT
|
Summary
|
- Interesting, but exhausting, week
- ATF and future work: input from community very valuable. Obvious that
Architecture Team will interact with all parties concerned. Sorting
out precise composition not entirely straightforward
- Encouraged, but not complacent, about the spirit of progress during
this workshop
- The PLAN: will need a strong move until next software workshop;
valuable input from some systems already available
- Platform discussion in Focus: Linux on Intel is the mainstream
direction, but need for one more system is recognised
- Next SW week: 14 - 18 February 2000; should we consider fewer meetings
and more workshop-style sessions?
|
Discussion
|
- Q: Architecture Team needs to be formed very soon, broad consultation
required. A: Discussions are ongoing
- Weird that IT gets re-structured just before the CERN review of LHC
computing
- Idea of topical workshops not appreciated
- Strict timekeeping is very good for people who only wish to attend
certain talks
|