Summary |
- Tools: Dependency grapher (by J. Hrivnac) to depict dependencies between
packages; Cvsweb, Bonsai, Light: code documentation with partly overlapping
functionality, need to understand status of Light 2 and define long-term
strategy
- Reviews and ASP: six reviews requested recently, three going on. How to
accelerate, proceed to code review, get more people involved?
- Proposal to reconsider the review procedure, maintaining the objective of
  controlled high-quality software; how formal should a design/code report
  be? How formal should the review procedure be?
- Possible solutions: Walkthroughs (with a written report); allow for simpler
and less formal design documents; allow for code reviews without design
documents (???); offer help for authoring design documents
- Concerns: How to ensure consistent, easily readable high quality
documentation? How to do walkthroughs?
- Feedback from community required
- ASP: New version of working document, open for comments until December 19th
- Round of domains: ID: Preparing for implementation of detector description
data base, testing common C++ clustering, setting up for simulation work
- Muon: Moving to cvs/srt, effort going on on muon identification and track
fits in B field
- Reconstruction: Large efforts on combined reconstruction. Astra future
  being discussed. Xkalman++ to be put into the repository, but documents
  still to be provided. Successfully tested on NT. Track class drafted, more
  documentation needed
- Magnetic field: Abstract base classes exist, services for derivatives as
well. Need to check what Geant4 provides in terms of B field and
intersections of tracks with volumes
- Graphics: Wired implementation ready to go, where to store the Wired
  kernel? Suffering from a CLHEP bug on DEC. Ready to implement visualisation
  for systems, asking for help. Suggested to distinguish between "online"
  and "offline" scenes - histograms are scenes as well
- Control: Working on integrating the LAr prototype, trying to read events
from Zebra tapes as starting point for reconstruction. Will start from a
fresh copy of Arve and implement component control prototype
- Analysis tools: Awaiting user requirements. LHC++ components being
evaluated in isolation
- Detector description: Improvements for muon, ID next on the list (required
  by April 99). Will consider XML only later
- Domains, DIG working: Agreed to create four new domains for combined
reconstruction (e/gamma, Jets/Emiss, b tagging, muon identification). Will
try and have a weekly 1/2 hour on DIG matters, most likely before the
weekly software meeting, videoconferenced
- Need clear description of policies about cvs usage, releases, prerequisites
for development: try and incorporate that into the introduction to the ASP
|
Discussion
|
- For the review procedure, one should try and learn from the obviously
successful experience of Geant4
- Most developers seem to perceive the provision of formally correct design
documents as a burden
- Reviews are already requested too late - in many cases, development
  is so far advanced that it is very difficult to incorporate the
  feedback from the review. Producing the documents is felt to be too
  heavy, as is the review itself. All three points need to be considered
- A possibility would be to only require informal documentation for the
design review/walkthrough, but ask for the formal design and code
documents for the code review. However, this implies that the design
is being written up before the code is developed, which appears often
not to be the case
- Walkthroughs have the important advantage of allowing for immediate
interaction between provider and "reviewers"
|
Tue |
10:15 |
Topic |
G. Poulard: Status of software for physics performance
TDR |
References |
Slides: HTML,
PPT,
PS |
Summary |
- Status of geometry description: stable since February 1998 for ID and
calorimeters, only new digitisation for tile added. Muons have changed a
lot since February, lots of dead material added as well as end caps. All
muon geometry changes done in CMZ
- Reconstruction: stable calorimeter code, ID with some improvements, muons
now taking dead material into account
- Combined reconstruction: Many new developments on e/gamma reconstruction,
conversions, muon identification (2 approaches using tables and GEANE)
- Still ongoing work: e/gamma, conversion, soft electrons, muon id, soft
muons, tilecal cells, vertexing, ...
- Combined N-tuple filled from RECB chain, contains ID tracks, jets and
  total energy, missing Et, EM clusters and e/gamma id, tau id, ... Being
  tested: e/gamma id in calo and tracks, conversions, tile cal cells,
  combined muon fit, soft e id, soft mu id, primary vertex, b vertex
  tag. Size currently ~ 20 kB for a typical two-jet event
- Cvs/srt migration: initially planned to only have cvs/srt, but new code
was often first released in CMZ. Hence new code was put into 98_2.
- Now everything in cvs/srt except for DICE, but no 'official' release of
all the software for the physics TDR yet. Release 0.0.15 is considered a
candidate
- Porting to NT and Linux going on; no big problem for Fortran code, but C++
code more involved. Decided to use the same conventions on NT as CERNLIB.
Code being ported to SGI
- Conclusion: code for physics TDR is in good shape, missing items well under
control
|
Discussion
|
- Full release of Linux wanted
- Q: Is there any recommendation for how to mix Fortran and C++? A: No
  document yet, but could be provided (a sketch of the usual technique
  follows after this list)
- Q: Is there any effort going on to convert the combined N-tuple to Objy
such that they can be used with LHC++? A: Yes, but it is not yet
available, since some restructuring of the data should be done
- Q: What can be combined in a single executable? A: All calorimetry,
iPatrec, Xkalman, muon, muon id; more modules not tried yet
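As no recommendation document exists yet, the following is a minimal
sketch of the usual mixing technique, assuming a Unix Fortran compiler
that appends a trailing underscore to external names and passes all
arguments by reference (conventions differ between compilers; the
routine FILLNT is invented for illustration):

    // C++ side: declare the Fortran routine with C linkage, using the
    // compiler's name-mangling convention (often a trailing underscore).
    // Fortran side, for reference:
    //       SUBROUTINE FILLNT(NTRACK, PT)
    //       INTEGER NTRACK
    //       REAL PT(*)
    extern "C" {
        void fillnt_(int* ntrack, float* pt);   // all args by reference
    }

    int main() {
        int   ntrack = 2;
        float pt[2]  = { 10.5f, 22.3f };        // REAL maps to float
        fillnt_(&ntrack, pt);
        return 0;
    }

CHARACTER arguments carry extra hidden length arguments and are best
avoided across the language boundary.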
|
Tue |
11:10 |
Topic |
T. Hansl-Kozanecka: Trigger simulation software |
References |
N/A
|
Summary |
- Task: development of selection algorithms for trigger L1, trigger L2
- Realistic simulation of hardware for L1, reconstruction-like processing
in RoI for L2
- Performance of algorithms, trigger menus
- High-level (L2, event filter)
- Tools: complementary packages: full detector simulation, partial event
simulation (ATrig, CTrig, modified xKalman); partial detector simulation,
full event simulation; analysis packages which are partly being moved to
ATrig
- Data exchange: ASCII files containing detector geometry, information per
event (L1 RoIs, Hits per L2 RoI, L2 ATrig results). Input to CTrig
- ATrig: Modular structure, constants stored in Zebra TZ package, naming
conventions
- All software maintained in cvs/srt
- Description of sample algorithms and their performance
- Tool exists which can be used conveniently, thinking how best to evolve
  to C++. Help from experts requested and given. Serious lack of manpower,
  most modules being worked on by a single person
|
Tue |
11:50 |
Topic |
S. Gonzalez: Environment for ATrig jobs |
References |
Slides: PDF
|
Summary |
- All ingredients for running ATrig successfully are in the repository,
including scripts and documentation
- Perl scripts have been developed for setting up the environment for,
  and running, ATrig, a procedure which can otherwise be confusing for a
  newcomer
|
Discussion
|
- These scripts could be interesting for setting up remote production
jobs
|
Tue |
12:00 |
Topic |
M. Sessler: C++ implementation in reference software
|
References
| Text slides: HTML,
PPT
Profiling results: PS
Class diagram for SCT preparation: EPS
Class diagram for TRT implementation: EPS
|
Summary |
- Task of reference software: Performance studies, benchmarking
- Input: ASCII, interface to Zebra, Objectivity (later)
- Steering, algorithms in C++
- Parameters and data file names stored in configuration file
- Example implementation: TRT; classes similar to offline reconstruction
- Performance comparison between C implementation (running on 400 MHz
  Alpha/DUX) and C++ implementation (running on 300 MHz PII/Linux):
  Factor 1.7 in favour of C/Alpha/DUX, which can be understood in terms
  of machine speed, redundant copying, known inefficiencies, and perhaps
  a small additional C++ overhead (see the rough breakdown after this list)
- SCT/Pixel: class diagram presented
- Being prepared: common framework, event display, ...
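As a rough decomposition of the quoted factor (using only the numbers
above; the split is indicative, not measured):

    1.7 (total)  ~  400 MHz / 300 MHz (clock speed)  x  ~1.3 (copying,
                    known inefficiencies, possible C++ overhead)

i.e. the clock-speed ratio of about 1.33 leaves a residual factor of
roughly 1.3 for the software differences.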
|
Discussion
|
- Doubts about 20% performance penalty due to C++
- Q: Do you consider neural network based trigger algorithms? A: They
are under study, but we are reluctant
- Q: What is the long-term strategy? A: Currently, trigger simulation is
somewhat between Online and Offline. There is time pressure from a
TDR due by end 99, but integration into offline software also aimed
for
|
Wed
| 09:00
| Topic
| H. Kurasige: CHAOS: Simulation for Atlas
|
References
| Slides: HTML,
PPT
|
Summary
|
- Comprehensive Atlas Object-Oriented Simulator
- Working group formed in November
- Based on Geant4, HepODBMS, Arve, aimed at providing environments and
tools for
describing detector geometry, physics events, detector responses,
digitisation, visualisation and histogramming
- Libraries and executables for full simulation and test beam setups
- Based on OO technologies
- First draft of URD:
http://home.cern.ch/~ada/geant4/URD.htm
- Parameters can be stored, exact reproducibility of results with same
set of parameters
- Three modules: Physics event generator, detector simulator,
digitiser simulator
- Class category diagram presented; Management, EventHandling,
  Particles+Processes, PrimaryGenerator, Tracking, Material to be
  provided by CHAOS team, as will be DetectorDescription, Visualisation,
  GUI. DetectorConstruction, SensitiveDetector, Hits expected to be
  provided by the systems
- Detector construction category: Construct G4 geometrical objects with
real geometry, readout geometry
- Volume collisions will be easily detected, will also use CAD data
via the STEP reader to define envelopes for systems
- Milestones and schedules: Jan 99: First release of prototype framework
with Si/TileCal/TGC/BarrelMuon, in order to get system people started;
Feb 99: Geant4 tutorial, CHAOS WG meeting, some detector component
envelopes; May 99: Interface to detector description, comparison with
beam tests; July 99: First internal version of full geometry, all
sensitive detectors; Dec 99: First collaboration-wide release, detailed
comparisons with G3
- Currently working on Web page, cvs repository, migration off Tools.h++
- Platforms initially supported: Sun, SGI, HP, Linux
|
Discussion
|
- Q: How many Arve modules will Chaos represent?
- Q: Is an ASP compliant design report under work? A: This might be a
good case for a walkthrough
- Q: To which extent is the first release considered a prototype to be
thrown away?
|
Wed
| 09:30
| Topic
| T. Burnett: Arve
|
References
| Slides: HTML,
PPT
|
Summary
|
- Class diagram of top-level Arve classes, formal design document exists
and has been reviewed
- Other users than Atlas: Glast, detector development for NLC at SLAC
- Recent changes: Graphics removed from geometry; inheritance based on
Atlas graphics model removed; cascading menus; new 2D display
capabilities
- Start a new project Arve (old one was called arve), leaving the
Gismo stuff behind; object network (component model) to be
incorporated; cleanup of control - graphics interface; port to more
platforms; implement iPatRec and other reconstruction modules,
try it as framework for CHAOS
- Aravis: GraphicsControl to be replaced by Aravis, to inherit from
AbstractScene; ArvePlottableRep to be replaced by AravisPlottableRep,
to inherit from PlottableRep
|
Discussion
|
- iPatRec implementation requires the interface to the detector
description (which is partly already available, such as for the TRT)
- Interfacing to LHC++ components should also be done
- Maybe it's better to implement iPatRec with the current control
structure, and migrate to object networks later
- Single particle creation is indeed needed
- Aim of Aravis changes is to make the interface the same as for the
other AbstractScene, which means that Aravis can be used anywhere, and
Arve can use any graphics package
|
Wed
| 10:00
| Topic
| J. Hrivnac: Graphics
|
References
| Slides: HTML,
PS,
Applix Presents
|
Summary
|
- More details in graphics meeting on Thursday afternoon
- Abstract scenes: categorised into EventDisplay, Histogram, and Misc,
partly coupled to application directly, partly by intermediate storage
- Domain specific graphics, graphics control (SceneList, ObjectBrowser)
- Event displays: Atlantis: installed, new menu, XML files, port to NT
will be difficult. Wired: will be installed soon. AVRML: done.
AtlVis (application of HepVis and OpenInventor): not started. Aravis:
new design under review, new code being prepared
- Histograms: AHisto/LHC++: works, more progress depending on LHC++.
Root: Proof of existence, but requires design compromises on our side.
HippoDraw, JAS: will be looked at
- Miscellaneous: XML: basic implementation exists, being improved,
library being written. AsciiText: done. JavaScene: prepared, but not
currently maintained
- Data: ID: done, needs improvement; no clusters, tracks etc. yet.
  Simulation: done, needs improvement; Muon: will be done; LAr: no
manpower. Reconstruction: objects not yet defined, will be done.
MagneticField: to be done. Trigger simulation: being worked on
- Control: SceneList: done and available. ObjectBrowser awaiting
availability of manpower
- Technical support: Implementation guidelines document,
CreatePlottableModel script, DependencyGrapher
- Applications: Atlantis, Wired waiting for useful data including
well defined objects of reconstructed entities. GraphicsFromEvent
(use any event on any Zebra tape) available
- Should there be an abstract 3D scene? Should there be an abstract
  histogram? Rather go for self-similar structure. Design under work,
  to be reviewed (or walked through) soon (a sketch of the scene/plottable
  split follows after this list)
- Reverse operation, (G)UI: new ideas, need more discussion
- Concept of level of detail for AbstractScenes introduced as well as
UnavailableScenes; available for AVRML, AHisto (at compile time now,
later at run time)
- Not happy with some names (AbstractScene, Plottable, PlottableModel,
PlottableRep)
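A speculative C++ sketch of the scene/plottable split discussed above;
the member functions and ownership conventions shown here are
assumptions for illustration, not the actual Atlas interfaces:

    // A Plottable (e.g. a track) can produce a scene-specific
    // representation of itself; an AbstractScene (event display,
    // histogram, ...) deals only in such representations.
    class PlottableRep {
    public:
        virtual ~PlottableRep() {}
        virtual void draw() = 0;
    };

    class AbstractScene {
    public:
        virtual ~AbstractScene() {}
        virtual void add(PlottableRep* rep) = 0;   // ownership assumed
        virtual void clear() = 0;
    };

    class Plottable {
    public:
        virtual ~Plottable() {}
        // Create the representation appropriate for the given scene.
        virtual PlottableRep* createRep(AbstractScene& scene) const = 0;
    };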
|
Discussion
|
- Q: For the renaming, should namespaces be considered? A: Yes, as soon
as namespace usage is allowed
- Q: What about planning, milestones, manpower? A: Problem is that
graphics is dependent on other domains (track class etc). A detailed
list of milestones is on the Web. Severe manpower problems in some
areas, not all of which are considered critical
|
Wed
| 11:00
| Topic
| M. Stavrianakou: Status of MC productions
|
References
| Slides: HTML,
PPT
|
Summary
|
- Simulation productions mostly finished, with the exception of
Higgs to two photons, Higgs to four leptons, MSSM Higgs, Heavy
Higgs
- About 1 million events (of 1...2 MB each) fully simulated
- Reconstruction productions: another production scale effort, awaiting
finalisation of combined reconstruction code
- Conclusions to be drawn: Production statistics required (number of
events, size and CPU, manpower, ...); production coordination to be
improved (physics requirements and priorities, technical management,
running and monitoring), corresponding allocation of manpower;
testing of production software is really indispensable, need to go for
a proper, formal, repeatable testing procedure for future productions;
production to be automated more, and to be documented well
|
Discussion
|
- Q: Would the bookkeeping information be put into a real data base, and
be made available publicly? A: Yes, provided the manpower can be found
- We should look at possibilities to collaborate with other experiments
- Data transfer to and from CERN also needs improvements, automated
checking scripts, ...
- Suggestion to provide a description of the jobs to be done to improve
the situation
|
Wed
| 11:20
| Topic
| M. Stavrianakou: Repository and releases
|
References
| Slides: HTML,
PPT
|
Summary
|
- Current physics TDR software routinely built on HP, DEC and IBM,
ATrig on HP and DEC
- 98_2 for HP, DEC and IBM, SGI (Boston), Linux (probably not complete)
- In repository now: TDR software (25 packages, mainly in F77 and Age),
domain software (15 packages, mainly in C++), contributed and external
- Lines of code: 505 k F77 (including Geant3), 86 k Age, 197 k C++
- Problem to reach full ASP compliance for all domain software in the
repository, overview over current status to be drawn up
- Supported platforms: Not really clear what "supported" means - for
  the time being it means that the code compiles fine and passes the
  tests built into the repository
- HP (moving from aCC 1.12 to 1.18), DEC (moving from cxx 5 to cxx 6),
IBM; coming soon: Linux (RedHat 5.1 with egcs 1.1), Solaris
(2.5.1 with SWpro 4.2); coming later: SGI, WNT
- Releases: Migrated to SRT 0.3.0 with release 0.0.14, soon to be moved
to 0.3.1; all these releases to be considered developer releases, not
for production. Releases have been done roughly bi-weekly on three
platforms; doesn't seem realistic to build on seven platforms
weekly. Help from outside institutes? Converge on a few "most critical"
platforms?
- Nightly builds: to facilitate developers' work, monitor repository
  'sanity', speed up weekly releases. Works on head versions now,
  considering to introduce a special tag; build logs are compressed into
  a readable format. Automatic reports of failures to developers are
  difficult (packages fail if one of their dependencies failed; see the
  sketch after this list)
- Possible speedups of releases: build on local disks and/or dedicated
machines; don't rebuild unchanged code; build on fastest platform
first; profit from nightly build for early debugging; automate the
release step
- Plans for production releases of TDR software on all supported
platforms
- Repository tools: browsers (cvsweb, Bonsai, Light), automatic list of
package coordinators, dependency grapher
- Quality assurance: Some tools integrated in SRT (CodeCheck,
Insure++, Logiscope); policy needed as to who should apply these
tools when; metrics collection; promote or enforce ChangeLog policy;
collect and possibly suppress build warnings, code inspection
- Spider subprojects: Coding conventions and CodeCheck; common SRT
(to be discussed further on Thursday); other software configuration
management issues (tutorial to be prepared, draft of SCM plan);
manpower and sharing of responsibilities
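For the failure-report problem mentioned under nightly builds, a simple
blame pass over the package dependency graph can suppress reports for
packages that failed only because a dependency failed. A minimal sketch
with invented package names, not the actual SRT tooling:

    #include <iostream>
    #include <map>
    #include <set>
    #include <string>
    #include <vector>

    // For each package: did its own build fail, and what does it use?
    struct Package {
        bool failed;
        std::vector<std::string> deps;
    };

    // A package deserves a report only if it failed and none of its
    // (transitive) dependencies failed first.
    bool dependencyFailed(const std::map<std::string, Package>& pkgs,
                          const std::string& name,
                          std::set<std::string>& seen) {
        if (!seen.insert(name).second) return false;   // cycle guard
        auto it = pkgs.find(name);
        if (it == pkgs.end()) return false;
        for (const auto& d : it->second.deps) {
            auto dit = pkgs.find(d);
            if (dit != pkgs.end() && dit->second.failed) return true;
            if (dependencyFailed(pkgs, d, seen)) return true;
        }
        return false;
    }

    int main() {
        std::map<std::string, Package> pkgs = {    // invented example
            {"Utilities",     {true, {}}},
            {"MagneticField", {true, {"Utilities"}}},
            {"iPatRec",       {true, {"MagneticField"}}},
        };
        for (const auto& [name, p] : pkgs) {
            std::set<std::string> seen;
            if (p.failed && !dependencyFailed(pkgs, name, seen))
                std::cout << "report failure to developers of "
                          << name << '\n';
        }
        // Prints only: report failure to developers of Utilities
    }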
|
Discussion
|
- For Solaris, should go to 2.6 right away, to be checked with RD
Schaffer
- For Linux, the policy is to take the egcs compiler version which is
in production on Asis
- SRT 0.3.1 is a bug fix release, backward-compatible
- If a particular version of SRT is required, it should be the default
for the current release
- Discussion about the production release of the TDR software, majority
tends to prefer to release a subset of the repository
- One should in any case consider not rebuilding packages which have not
  changed
- Q: What about NT? A: Since only one institute is interested, they
should provide the support. Not considered a high priority item
- We should encourage IT to consider migrating at least part of pcsf to
Linux
- Light (Ada Farilla): Some work is required
to make it work correctly, but manpower is very scarce. All
automation that can be done on the Atlas side has been done. We need
to understand the further evolution of all related tools
|
Thu
| 09:00
| Topic
| J. Knobloch: LCB issues
|
References
| Slides: HTML,
PPT
|
Summary
|
- LCB projects: Geant4, ODBMS, LHC++, Videoconferencing,
Monarc, PIE, Event filter farms, Spider
- To be covered in next week's session: Geant4, Videoconferencing,
Monarc, event filter farms, LHC++, RD45
- Geant4: Deliverables: URD, introduction, installation guide,
application developer manual, toolkit developer manual, physics
reference manual, software reference manual, examples, source code,
MoU. The latter signed by Atlas, CMS, CERN; more signatures expected
fairly soon
- Event filter farms: Joint Atlas/CMS/IT project, Alice and LHCB
observing. Project leader: Frederic Hemmer. Resources: 4.5 FTE,
65 kCHF. Tasks: Farm architecture and management, application
management, farm computing. Motivated by fact that needs and
requirements of the experiments are very much in common
- Monarc: Main deliverables: Validated set of simulation tools, set of
baseline models, testbed, definition of regional centre architecture
and functionality. PEP presented at October LCB, referee report
expected, decision next week. Spokesman: Harvey Newman, Project leader:
Laura Perini
- Videoconferencing project phase 1: VRVS now well established, gateway
to Codec, Codec kits for PCs, MCU, equip conference rooms, recording
and playback
- Phase 2 (18 months from mid April 1999): Packet (local virtual rooms,
sharing tools, low capacity links), improved booking, new rooms.
Resources: 49 man months, 236 kCHF
- LHC++, RD45: User evaluation and feedback. Need for flexibility in
replacing components. Alice is said to have decided not to use LHC++
analysis tools. CLHEP: Atlas will ask for more frequent releases,
check feedback on requests concerning Explorer, scripting language
|
Discussion
|
- Q: Will Monarc define the computing models? A: No, a full set will be
  evaluated, from which a choice can then be made
- Q: Does Microsoft NetMeeting fit into this picture? A: The aim is to
be platform-independent. At present, there is no place where it can fit
in
- Flexibility of replacing components can be very hard; perhaps a small
interface layer, HEP specific, can help
- Q: What replacement solutions are being looked at? A: Tools.h++,
Explorer, HistOOgram
|
Thu
| 09:35
| Topic
| S. Fisher: Spider project
|
References
| Slides: HTML,
PPT
|
Summary
|
- Objectives: define, implement, and deploy a process based, modular and
integrated software development environment for the LHC experiments and
projects
- Chosen topics: Programming (C++ coding standards), software testing,
SRT
- Coding standards: based on standards of future experiments. Hard to
reach a compromise. For some classes, agreement reached on rules
and guidelines. CodeCheck rules written (based on Atlas work)
- Advantages and disadvantages for Atlas - need discussion in Atlas,
approval by the collaboration
- SRT: Collecting requirements of all experiments, to be merged into a
common document, evaluate existing solutions
- Benefits for Atlas: long-term maintenance, exchange of packages between
collaborations; cost: >= 1 FTE for some time, customisation
- Hidden cost of Spider: Other IPT activities suffer such as Light,
tool support, Wired
- Conclusions: Spider (at present) offers few benefits; it is rather
  costly both directly and indirectly
|
Discussion
|
- Would be nice if other projects would not need to suffer that much
- Strategy is to start off with SRT, and add more subprojects later
- Q: What Atlas names are on the Spider list? Has Atlas officially
endorsed the project? A: No endorsement yet, will be discussed in EB
this week. Involved: Jürgen Knobloch, Maya Stavrianakou,
Steve Fisher
- Common solutions are always a benefit to the community, can also
support common projects
- Always problems with our version of SRT, we should not insist too
much on it. Much more positive experience with CDF SRT. Better
communication needed
|
Thu
| 10:05
| Topic
| M. Stavrianakou: Analysis tools
|
References
| Slides: HTML,
PPT
|
Summary
|
- "Ultimate" analysis environment: Interactive selection of histogram
area, view/change selection criteria, reprocess selected events, ...
- Assumptions, policies, constraints: User requirements with Use cases,
ASP to be followed for analysis tools. Software requirements to be
collected as well, interfaces to other domains (database, simulation,
reconstruction, graphics, "physics" etc.); get on with current choices
- Ongoing or planned projects: Atlfast++: C++ re-incarnation of Atlfast,
combined reconstruction N-tuple (Objy/LHC++ and Root) with realistic
analysis scenarios, 1 TB database and analysis tools, ...
- Evaluation of LHC++ components: HEPODBMS data base, histogramming
  packages (HistOOgrams, HTL), minimisation and fitting (GEMINI and
  HEPFitting), interactive analysis (HEPExplorer), graphics and
  visualisation (HEPInventor, HEPExplorer), scripting (SWIG compliant
  interface to a variety of scripting languages)
- Outstanding/controversial issues: Tool for interactive analysis and
visualisation. Explorer lacking some functionality and power. Atlas
evaluation to start when user requirements and manpower available;
may need alternative to Explorer in LHC++ framework. Java based tools
increasingly popular. Contingency plan?
- Alternative, intelligent histogram storage: to allow plotting and
operation on histograms with interactive tools, for stand-alone,
portable, faster analysis on "dead" histograms, with possible feedback
into data base
- Availability and releases: Supported platforms (Linux!), release
frequency, licensing, distribution outside CERN; end user involvement,
also outside CERN; ASP compliance to be achieved; manpower problem
(currently 1.2 FTE, to go down soon; needed: 3 FTE)
- Milestones: User requirements (12/98 - a deadline that seems likely to
  be missed), evaluation and recommendation for interim solutions (6...8
  months following user requirements capture); more detailed milestones
  to be defined
- Web page to be created, mailing list?
|
Discussion
|
- Training for end users is a significant investment; the lifetime of
  packages should be in a reasonable ratio to it
- Decision for tool set to use will be taken two years before start of
data taking
- Need an interim recommendation by April 99, because of the test beams
- Q: What is the foreseen release frequency? A: At least once a year.
  CDs for easy installation being prepared, some elements may need more
  frequent releases. Again, a release is required by April 99
- Contingency should be foreseen anyway
- Some of the architectural design and implementation fulfilling existing
  requirements is already available in the Graphics domain. Some
  disagreement, needs discussion in DIG
|
Thu
| 11:10
| Topic
| J. Knobloch: Training
|
References
| Slides: HTML,
PPT
|
Summary
|
- Quite some training being offered at CERN by Educational Services,
but only useful for people based at CERN
- Question: Whom should we address?
- Existing courses: Basic software engineering, methods, tools and SDE,
...
- Need to give training in a distributed way: send trainers, record
courses and tutorials, videoconferencing and multimedia,
multiplicators
- CTP: Foresees to establish a training plan; a responsible person is
  needed to drive it. CERN (Mick Storr) willing to help, and to work on
  technologies, exploit collaborative tools; profit from IT/IPT, Monarc,
  exploit possibilities to collaborate with other collaborations
|
Discussion
|
- IT/US also offers training (tutorials, bookshop)
- Training on demand
- "Help desk"
- Discussion of interested people on Friday 14:00 h
|
Thu
| 11:30
| Topic
| RD Schaffer: Data base, event, 1 TB milestone
|
References
| N/A
|
Summary
|
- Trying to get digits into Objectivity. Between 1 and 2 TB are
available on tape (hits and digits). Still missing: tiles and forward
calorimeter, parts of muon digits. ID and LAr fine
- Procedure: read digits, build event, write it into Objectivity
- Using BaBar-like converters from transient to persistent and vice versa
- Much time taken to set up the executables, suffering from hardware
problems
- Efficiency to be improved, more changes to be expected
- Writing 1 TB will take roughly a week (3 MB/s), HPSS can cope with
that
|
Discussion
|
- Q: Can the same application be used to read Zebra tapes and
Objectivity? A: In principle yes
- Q: Is the data base physically residing at CERN exclusively? A: For the
moment, yes. Distributed data bases have been tried, though (Tilecal
data to LBL, G... project of CMS between CERN and Caltech)
- Q: Will it be possible for users to write into the data base? A: Rather
not for the 1 TB milestone, this should come later
- Q: How much have you been writing? A: Several GB, not a large fraction
yet
- Q: Is there any coupling to LHC++ components yet? A: No, not yet
- Separation of transient and persistent model offers many possibilities
  for optimisation (compression etc.)
|
Fri
| 09:00
| Topic
| P. Mato: LHCB software architecture GAUDI
|
References
| Slides: PDF
|
Summary
|
- LHCB is a young experiment, recently approved
- Aiming for framework to be used in all event data processing
applications: high level trigger, simulation, reconstruction, analysis
- Keep in mind reuse, and the fact that most software will be written by
  physicists. Low coupling between components
- Major milestones every two years, first one mid 2000: Migration to OO
software, retirement of old software
- Plan to go in short cycles (2...3 months) of incremental implementation
and releases; feedback from users at each stage, set priorities
- Major design criteria: Clear separation between data and algorithms;
types of data: event data, detector data, statistical data. Clear
separation between persistent and transient data; data store centered
architectural style, user code encapsulated in few specific places:
algorithms and converters. All components with well defined interfaces,
as generic as possible. Aim for re-use as much as possible
- Object diagram with the three data types, algorithms, application
manager, services
- More to be considered: Event data browsers, event displays, histogram
displays, detector database editors and tools, property editors for
algorithms, user interfaces etc.
- Classification of classes: Application managers, services, algorithms,
converters, selectors, event/detector data, utility classes
- Class diagrams for services: several interfaces per service, all
  services inheriting from common base class. Clients only see the
  interface. Same for algorithms. This strategy is compatible with Java,
  DCOM, ... (a sketch follows after this list)
- Transient event store: Algorithm interacts with event data service,
which in turn interacts with the persistency service. Hierarchical
event model
- Algorithms are data driven, can be combined
- Transient data model: identifiable objects should be sizeable because
of their overhead; e.g. EcalHits are identifiable, but not the
individual hit. Hierarchical storage, but some cross-links necessary
as well
- Transient to persistent conversion: The converter is the only component
which knows about both the persistent and transient model
- Detector data representation: Transient detector store coupled via
different modules to persistent detector data, Geant 4 description, ...
- Detector description: includes detector structure, geometry and
positions, mapping of detector cells to electronic channels, detector
control data, calibration and alignment data. Transient store contains
a snapshot of the detector data valid for current event
- Links between event and detector: direct associations in the transient
model, logical identifiers in the persistent model
- Application configuration: JobOptions; algorithm/service properties
data base; detector data base; write specific code; user interface
components
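A minimal C++ sketch of the service pattern described above, under
stated assumptions: the names IService, IHistogramSvc, Service and
HistogramSvc are illustrative, not the actual GAUDI declarations.
Clients hold only abstract interfaces, while concrete services inherit
the generic machinery from a common base class:

    #include <string>

    class IService {                       // generic service interface
    public:
        virtual ~IService() {}
        virtual void initialize() = 0;
        virtual const std::string& name() const = 0;
    };

    class IHistogramSvc {                  // one specific interface
    public:
        virtual ~IHistogramSvc() {}
        virtual void book(const std::string& title, int bins) = 0;
    };

    class Service : public virtual IService {   // common base class
    public:
        explicit Service(const std::string& n) : m_name(n) {}
        void initialize() override {}
        const std::string& name() const override { return m_name; }
    private:
        std::string m_name;
    };

    // Concrete service: implements its specific interface(s) and
    // inherits the generic machinery; clients see only IHistogramSvc*.
    class HistogramSvc : public Service, public virtual IHistogramSvc {
    public:
        HistogramSvc() : Service("HistogramSvc") {}
        void book(const std::string& /*title*/, int /*bins*/) override {}
    };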
|
Discussion
|
- Q: Data base very essential; model for protecting it? A: The central
function is held by the transient model
- Q: Who will write the converters between transient and persistent
  model? A: Also physicists will do this. Standard base classes and
  templates will be provided (see the sketch after this list)
- Q: Run time configuration is a big step forward, but for analysis
even more flexibility required. A: From the architectural point of
view, there is no problem. Foreseen to implement that in the user
interface
- Q: What are the requirements on the permanent storage? A: No particular
ones from the architectural point of view, for the time being -
currently Zebra is being used. OO data bases would however simplify
the converters a lot
- Q: How can one navigate within the data of one event? A: No final
decision about that yet
- Q: What is the reaction of the collaboration? A: They go along with
these ideas, and with the plan for incremental implementation
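A sketch of what such a converter base class might look like; all names
here are invented for illustration. The point, per the design above, is
that only the converter sees both the transient and the persistent
shape of an object:

    // Algorithms see only the transient type, the store only the
    // persistent one; the converter alone knows both representations.
    struct TransientHit  { double x, y, z; };   // used by algorithms
    struct PersistentHit { float  x, y, z; };   // what is written out

    template <class Transient, class Persistent>
    class Converter {                           // invented base class
    public:
        virtual ~Converter() {}
        virtual Persistent toPersistent(const Transient& t) const = 0;
        virtual Transient  toTransient(const Persistent& p) const = 0;
    };

    class HitConverter : public Converter<TransientHit, PersistentHit> {
    public:
        PersistentHit toPersistent(const TransientHit& t) const override {
            return { float(t.x), float(t.y), float(t.z) };
        }
        TransientHit toTransient(const PersistentHit& p) const override {
            return { p.x, p.y, p.z };
        }
    };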
|
Fri
| 10:15
| Topic
| G. Poulard: Report from reconstruction meeting
|
References
| Slides: HTML,
PPT,
PS
|
Summary
|
- 36 participants
- TDR software: discussion about combined reconstruction, combined
N-tuple. Now completely under cvs/srt
- Muon id with Geane: some problems to be solved
- For b tagging, more work needed, deadline of mid January
- Track fitting in B field (Kors, Patrick): requirements, design report
on the Web, not for review yet. Classes: MagneticField,
TrackingInMagneticField, Interceptor, LocalTrackParameters. Plans: load
map from Objy, implementation of step size, re-engineer latest code,
finalise design document, relation with Track. Discussion about release
date, irregular grid, accuracy in regions of large field variations
- Muon reconstruction: expected to interface to Arve by April 99, same
for amdb++, Hit++, magnetic field
- ID: iPatRec still evolving, replacing Fortran by OO C++, effort towards
combined reconstruction, interfacing to Arve (June 99). XKalman++ with
capability to reconstruct in non-uniform field. Performance 20% slower
for uniform field, 50% slower for non-uniform field as compared with
Fortran version. Tests successful. Plans: interface to modular TRT
barrel geometry, interfacing to Arve, consolidation with other
reconstruction programs (common clustering etc.)
- SiCluster class: common C++ approach implemented, some discrepancies
found when running with iPatRec, to be investigated
- Track package: different types of tracks expressed by polymorphism,
  exploiting both aggregation and inheritance (a sketch follows after
  this list)
- LAr calorimetry: Brookhaven getting started, drafting a URD (ready for
LAr week mid January). For design, will start from Lassi's prototype.
Available manpower not yet clear. Calorec++ incomplete, future unclear
- List of milestones (see slides)
- OO reconstruction by end 99 seems feasible
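A minimal sketch of the aggregation-plus-inheritance idea in the Track
package bullet above (class names invented; this is not the design
under review):

    #include <vector>

    struct Hit { double x, y, z; };

    // Different track types via inheritance; each track aggregates
    // the hits it was built from.
    class Track {                                 // common interface
    public:
        virtual ~Track() {}
        virtual double pt() const = 0;            // transverse momentum
        const std::vector<Hit>& hits() const { return m_hits; }
    protected:
        std::vector<Hit> m_hits;                  // aggregation
    };

    class InnerDetectorTrack : public Track {     // one concrete type
    public:
        double pt() const override { return m_pt; }
    private:
        double m_pt = 0.0;
    };

    class MuonTrack : public Track {              // another type,
    public:                                       // e.g. B-field fit
        double pt() const override { return m_pt; }
    private:
        double m_pt = 0.0;
    };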
|
Discussion
|
- Q: What about the parameters of a track at the vertex? A: They are
being provided
- Q: What does 'integration into Arve' mean? A: Geometry needs to be
available, access to Geant3 data; clearer view needed
- Q: The Interceptor class looks fairly closely related to Gismo; what
  is the strategy? A: Geometry and Gismo are decoupled
|
Fri
| 10:40
| Topic
| J. Hrivnac: Report from Graphics meeting
|
References
| Slides: HTML,
PS,
Applix Presents
|
Summary
|
- Atlantis: new menu bar, context sensitive help, command line interface;
muon detectors implemented, 'fish eye' projection in phi. Vertex
finding works, TRT, z vertex, XML reading
- Wired: using Swing now, detector tree, Java shell, XML reading. Plan to
implement event chooser, bean shell, infobus, Java2D, Java3D. Manpower
critical
- Muon event display: for debugging muon code, displays volumes, tracks,
hits. Written in Fortran90/Higz, C++ interface can be provided if
necessary
- Aravis: real AbstractScene, design under review, code mostly exists.
Want to keep it small and simple
- XML: New parser to be introduced as external software, common interface
into Utilities
- Histogram packages: JAS, HippoDraw, Orca; browser vs full analysis.
All solutions being looked at. Want to make JAS, HippoDraw available
as applications, incorporate Orca into arve and re-design it
- Overall visualisation architecture? Steps towards definition
|
Discussion
|
- Q: What is the real strategy? What do we recommend people to use?
A: For event display, Atlantis and Wired are the main lines; for the
histogramming, more evaluation is needed. Probably more precise
recommendation by end 99 with first versions of simulation and
reconstruction
|
Fri
| 11:30
| Topic
| H. Meinhard: Summary of decisions
|
References
| Slides: HTML,
PPT,
PS
|
Summary
|
- Summary of workshop, DIG, and ACOS presentations and decisions as
recorded in these minutes
|
Discussion
|
- Problems with LHC++ installation at home labs to be understood, people
  with problems should send detailed reports
|
Fri
| 12:00
| Topic
| J. Knobloch: ACOS decisions, planning of next cycles
|
References
| Slides: HTML,
PPT
|
Summary
|
- Milestones: reconstruction, simulation, analysis tools, general,
event storage, infrastructure
|
Discussion
|
- Should publicise milestones on the Web, with pre-assigned release
numbers for each milestone
- Again discussion about NT support, to be followed up in a future
meeting
- Solaris: Machine at CERN being installed, build will be tried
- Linux: Compiler problem being investigated by IT experts
|