Day | Time | Topic | References | Summary | Discussion
Tue | 09:00 | Topic | H. Neal, C. Uher, K. Sliwa: Opening remarks
References | Slides: HTML, PPT
Summary |
- H. Neal: Welcome to all participants
- Ctirad Uher (Department chair): History: Invention of bubble chamber
at UM, many more important people have been working at UM
- Now HEP involved in CDF, D0, L3, Atlas
- H. Neal: Workshop originally foreseen in Boston
- UM had largest faculty contingent in SSC/SLD, hence loss of SSC had
major impact. Now increased efforts in Atlas: MDT construction, ...
- UM: home of Goudsmit, Uhlenbeck, .... renowned summer schools, first
g-2 measurements
- Conference details:
- Transportation from/to hotel: shuttle available
- Lunch: Not organised, many small restaurants around
- Computer terminals: See distributed sheets
- Social events: Reception at 18:00 h on Tuesday, dinner at 18:30 h on
Wednesday
- Recreational facilities
- Transportation to Chicago: NWA may be on strike
- Conference assistance: Beth Demkowski, Connie Bridges
- K. Sliwa: Thanks to Homer for having arranged the workshop at UM, given
the difficult situation in Boston. Hopes that the meeting will boost
activities of US Atlas on software and computing
|
Tue | 09:30 | Topic | J. Knobloch: Introduction, long-term planning, workshop agenda
References | Slides: HTML, PPT
Summary |
- Thanks to Homer and Krzysztof
- Overview over agenda
- Long-term planning: Done for all systems in Atlas, administered
centrally
- Participation of computing felt essential (50...100 milestones)
- Could start from Offline software plan as outlined in CTP
- Ingredients: Milestones given in CTP (OO reconstruction and simulation:
first version by end 99, full chain operational on subset of final
infrastructure: 1 year before LHC startup; coverage of interim needs)
- Classification of milestones: General, Analysis, Event storage,
Simulation, Reconstruction, Infrastructure
- Next steps: Add missing topics, provide more detailed description,
identify people responsible for each milestone, agree on the dates for
the milestones, for milestones due in the next 10 months: define
intermediate milestones, assess resources available and needed
Discussion |
- Milestones need to be defined more clearly, some dates need adjustment
- Some of the milestones do not yet match the schedules of related
projects
- Reasonable to cluster milestones for the next two years
|
Tue | 10:00 | Topic | T. Akesson: Review of computing
References | Slides: HTML
Summary |
- Mechanism in place to review systems by Atlas-internal reviewers not
strongly involved in the project
- Recall of main aims of computing project, and of possible topics for
the review
- Review requested by computing coordinator. Timescale: introduced now.
Atlas management to appoint chairperson, and with chairperson the
members. Schedule for report: February 1999
|
Tue | 10:10 | Topic | J. Knobloch, for J. Pater, for G. Poulard: Status of Fortran software
References | Slides: HTML, gzipped PS
Summary |
- Slug, Atlsim, Atrig: no changes since May
- Dice: Correction on 14 August to avoid too many steps in tracking in
forward region
- Atutil: uses new muon AMDB data base, new steering for calorimeter
unpacking and data preparation, new pileup and calibration for
calorimeters
- Arecon: Use new jet finder algorithm, new precision-layer clustering
(basically the one of iPatRec) working. Calorec: EM shower
identification, new routines for combined reconstruction. xKalman:
interfaced with calorimeter, use standard clusters, C++ version
being tested, still performance problems. PixlRec: Improved
performance. iPatRec: migrated to SRT/cvs
- Muonbox: New pattern recognition and reconstruction with new AMDB
- Platforms: binaries available on HP, partly on IBM, DUX
Discussion |
- Q: How did the DICE bug affect the production?
- Q: What is the officially recommended version? Can't a version be
frozen and a new one be prepared? A: Problems are related with the
transition from CMZ to cvs/SRT
|
Tue | 10:45 | Topic | C. Onions: Cvs migration
References | Slides: HTML, gzipped PS
Summary |
- Original plan: freeze 97_6 CMZ in June 97; package maintainers to
participate in conversion; ATLSIM not to be converted
- Reality: 97_6 ready end July 97; developers did not get involved in
conversion (except for Atrig and Geant3); decision to postpone
migration; CMZ freeze deadline slipped to Jan 98 (release of 97_12)
- Then decided to go one by one; slug, generators, atlfast, geant3,
atrig done first
- CMZ freeze delayed once more (till 98_2), converted dice, atutil,
atlsim
- iPatRec developed and implemented in cvs
- Now converting last packages (atrecon, jetfinder, muonbox)
- Some problems to be solved ("frozen" atutil had been updated, agetocxx
and agetof incompatibilities)
- To be done: resolve incompatibilities; update makefiles for dice,
atrecon to build library of C++ code generated from Age files;
package developers to update atutil, generators, atlfast, dice; make
more developers use the repository
- CMZ development must stop now!!!
Discussion |
- Q: Is there support for Linux? A: Yes, at some level - some packages
are known to work under Linux, but full support awaits a Linux
reference machine becoming available at CERN
- Q: Event generators in CMZ are ancient; is there any hope to have them
updated? A: They have been migrated to cvs, and the maintainer
promised to update the copy in cvs, but this is still pending
|
Tue | 11:05 | Topic | L. Tuura: Status and plans of SRT
Summary |
- Version 0.3.0 prepared, still working on the manual, tests ongoing,
in particular NT support needs to be well tested
- Most makefiles will not need to be changed
- New features: Work against a local checked-out copy of the
repository (for notebooks and slow network connections); better
support for NT; automatic deduction of libraries to link against
- Many minor improvements, changes, bug fixes...
- With convergence towards a common solution with other experiments in
mind, resist implementing new features in the future and restrict
work to maintenance
Discussion |
- Q: Is there an intention to share SRT with CMS? A: Yes, CMS is
interested, discussion is going on
|
Tue | 11:15 | Topic | H. Meinhard, for M. Stavrianakou: Testing
References | Slides: HTML, PPT
Summary |
- No news concerning testing in the ASP
- Initiative involving experiments and CERN-IPT to survey existing
and planned activities
- Other experiments likely to assign low priority to testing as a Spider
project, but we feel it is very important
- Company identified by IPT offering good training courses on testing,
one at the component level (best suited for developers), and one at
the system level (best suited for coordinators); see the company's
Web page for more details. If interested, write E-mail to
Maya Stavrianakou.
Discussion |
- Q: Do other experiments follow these courses? A: Not known
- D0 is doing testing at the component level, authors are responsible,
mostly automated regression testing. Not much done concerning
system testing
- We should try to raise the priority of testing in the Spider
project
|
Tue | 11:25 | Topic | S. Fisher: ASP matters, Reviews
References | Slides: HTML
Summary |
- Recap of the main ASP statements
- ASP now maintained in SGML (DocBook 3 DTD)
- Simplification underway
- Changes to C++ rules: New section added on namespaces etc.
- External software: Rules to be tried with CLHEP
- Reviews: Cycle time too short to go through requirements, design and
code completely, hence by the time of the design the requirements are
out of date etc.
- Solution: Review one document with requirements, design and sample
code, requires mechanism to mark differences of current and previous
documents
- ASP itself needs to be reviewed once the changes are all in
- Non-immediate plans for deliverables: Specify deliverables by a DTD
Discussion |
- Q: Why not use HTML rather than SGML? A: HTML is not powerful enough
- Should still allow for separate reviews of requirements / design /
code as before
|
Tue | 11:40 | Topic | J. Knobloch: General ASP matters
Summary |
- ASP now foresees domains, each one with a domain architect, who
focuses on design questions. Managerial aspect within the
domain not covered yet
- Introduce another role: Domain coordinator, to cover the managerial
aspect
- Would probably help distributed development
Discussion |
- Q: What kind of authority would a domain coordinator need in terms of
manpower?
|
Wed | 09:15 | Topic | H. Meinhard: DIG decisions
References | Slides: HTML, PPT
Summary |
- Global design: Jürgen's DFDs discussed. Traditional OO modelling
techniques found not to be well suited for top-level design of large
systems, but object flow diagrams (much like data flow diagrams of
SA/SD) seem to do better. Tasks evolving: Define proper semantics;
complete collection of OFD, review; define work packages
- Object browser: Facility using C++ RTTI to get read (and possibly
write) access to objects in the system which enable RTTI. Classes need
to inherit from one trivial abstract base class, name to be defined
- Design of Tracks, Clusters, and Hits class: Need to go ahead.
Requirements have been collected, track class expected long before
next DIG meeting
- Milestones, planning: Jürgen's list discussed, procedure for
refinements found correct. Some discussion about deadline for decision
on DB vendor by 6/2001, but we'll keep this date for now
- Spider project: Atlas opinions for priorities: High: Testing, SRT
consolidation, documentation (Light 2). Medium: Architecture, design.
Low: Requirements, analysis, programming, verification and validation,
project management
- Reviews: Steve's proposal for amendment to the ASP (possibility to
review requirements, design, and code samples together) approved, but
old procedure kept too. Muon code review still not ready to start, but
Graphics control code review to be launched immediately. Upcoming:
SRT, iPatRec; awaited: Event, DetectorDescription, TruthEvent
- Compilers, use of C++ etc: Recommend to use egcs-1.0 on Linux for the
time being. Use the prefix std:: in the header files for elements of
the standard library, and use CxxFeatures in order to have SRT mask it
out on non-conforming machines. Coding rules will be reviewed only
after a few code reviews, and changes will be lumped together. Time now
for thinking about exception handling; Lassi's proposal to be sent to
atlas-sw-developers. Java: no need for change of long-term strategy,
ask one of the Java submitters to provide a proposal how to integrate
Java with our tools (SRT etc).
- ASP: Proposal of having two representatives per domain generally
accepted, but some discussion on the detail to which the roles should
be described in ASP
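The RTTI-based object browser mentioned in the summary can be sketched roughly as follows; the names used here (`Browsable`, `dynamicType`) are illustrative assumptions only, since the summary notes that the actual base-class name was still to be defined.

```cpp
#include <cassert>
#include <string>
#include <typeinfo>

// "Browsable" is an assumed name for the trivial abstract base class:
// inheriting from it is all a class needs to do to become reachable
// by the browser through RTTI.
struct Browsable {
    virtual ~Browsable() {}   // one virtual function makes the type polymorphic
};

struct Track   : Browsable {};
struct Cluster : Browsable {};

// The browser side: given any object through the base interface, C++ RTTI
// (typeid applied to a polymorphic reference) reports the concrete
// dynamic type of the object.
std::string dynamicType(const Browsable& obj) {
    return typeid(obj).name();
}
```

Because `typeid` on a reference to a polymorphic base class inspects the dynamic type, the browser needs no per-class registration beyond the inheritance itself.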
Discussion |
- UML may address the problem of the overall design of a large software
system
- Q: How frequently are we going to re-consider our attitude to Java for
example? A: Should not do that too often, probably once every 1 or 2
years is a reasonable compromise
|
Wed | 09:30 | Topic | J. Knobloch: Overall design - an attempt
References | Slides: HTML, PPT
Summary |
- Software community well accustomed to DFD (data flow diagrams)
- Key elements are actions, data stores, and data flows
- Shows dependencies and clarifies which actions must be available early
(eg. detector description)
- Can be used to define work packages
- Probably not going to be maintained for very long, and one should not
go into much higher a level of detail
- Muon and ID reconstruction diagram discussed with system experts
- Should try and do similar things for simulation and calorimetry
reconstruction
- Currently using a simple drawing tool, not linked with StP
Discussion |
- Q: Is there an automatic consistency check for the diagrams? A: Not
with this tool
|
Wed | 10:30 | Topic | H. Meinhard: OO reconstruction
References | Slides: HTML, PPT
Summary |
- Regular meetings, considerable success now in getting people to
migrate to OO, but strong boundary conditions (eg. physics TDR)
require some work on existing Fortran code, too
- Packages: iPatRec (60% OO) included in repository, design document
almost ready. xKalman: C++ version exists, being tested and changed to
handle non-uniform B field; performance issues to be solved. Work on
common OO clustering for ID packages started. TRT_Rec works in Arve,
iPatRec to be expected soon. OO B field implementation exists.
|
Wed | 10:40 | Topic | R. Clifft: iPatRec design
References | Slides: HTML, gzipped PS
Summary |
- iPatRec started entirely in Fortran, moved to 60% OO C++ while being
maintained
- Performance issues addressed in OO part (input, track finding -
combinatorials)
- Requirements included in design document, all maintained in StP
- Design document not quite ready, requirements could be more
complete
- iPatRec reconstruction OIG discussed
- BuildDetector OIG
- PrimaryTrackFinder OIG
- 'Final' design document expected in about 2 weeks, but beware of
constant changes to the code
- iPatRec Web page now exists, will be publicised shortly
Discussion |
- Q: Does iPatRec work with a non-uniform field? A: Being worked on,
simulated data to test are now available
- Q: How does the performance compare with Fortran? A: Considerable
speedups achieved by avoiding re-computations and using dynamic
storage
- Q: What about the Track class used in iPatRec? A: Existing services
described in design document
|
Wed | 11:15 | Topic | RD Schaffer: OO projects in test beams
References | Slides: HTML, PPT
Summary |
- Now using Objectivity for this year's tile cal test beam
- Moving existing C-Zebra based event code to Objectivity V5 within
six weeks by two people, without re-designing the event model
- ~ 100 GB of data written in parallel with existing system
- Reconstruction under development now
- Calibration system redesigned, using a hierarchical model (visitor
pattern)
- Other groups expected to use Objy for next year's test beams
- Interested to try out Babar's Conditions Database
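A minimal sketch of what a visitor-based traversal of such a hierarchical calibration model could look like; all names here (`CalibNode`, `Channel`, `Module`, `PedestalSum`) are hypothetical, not the actual test-beam classes.

```cpp
#include <cassert>
#include <vector>

struct Channel;
struct Module;

// The visitor interface: one visit method per concrete node type.
struct CalibVisitor {
    virtual ~CalibVisitor() {}
    virtual void visit(Channel&) = 0;
    virtual void visit(Module&) = 0;
};

// Common base for nodes of the calibration hierarchy.
struct CalibNode {
    virtual ~CalibNode() {}
    virtual void accept(CalibVisitor& v) = 0;
};

struct Channel : CalibNode {
    double pedestal;
    explicit Channel(double p) : pedestal(p) {}
    void accept(CalibVisitor& v) { v.visit(*this); }
};

struct Module : CalibNode {
    std::vector<Channel> channels;            // hierarchical containment
    void accept(CalibVisitor& v) {
        v.visit(*this);
        for (auto& ch : channels) ch.accept(v);  // recurse into children
    }
};

// A concrete visitor that walks the hierarchy and sums channel pedestals.
struct PedestalSum : CalibVisitor {
    double sum = 0;
    void visit(Channel& c) override { sum += c.pedestal; }
    void visit(Module&) override {}
};
```

The point of the pattern is that new operations over the hierarchy (dumping, validation, export to a persistent store) can be added as new visitors without touching the node classes.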
|
Wed | 11:20 | Topic | A. dell'Acqua, for M. Stavrianakou: Analysis in OO
References | Slides: HTML, gzipped PS
Summary |
- LHC++: Version 98A released on 28/07 for DUX, HP, Solaris and NT
- LHC++ workshop to be held end September, with heavy involvements of
experiments
- LHC++ is preparing a status report to the LCB
- Ongoing activities in Atlas: S. Resconi and A. dell'Acqua evaluating
physics analysis in LHC++; M. Stavrianakou porting user analysis
applications; S. Rinolfi developing Iris Explorer modules to print
histograms from selected container
- Plan to migrate combined reconstruction N-tuple to Objy, port of
Atlfast++ to Objy and LHC++
Discussion |
- Q: what's the status of LHC++ on Linux? A: See web page
- Q: Do the projects follow the ASP? A: Not for the evaluation phase,
to come later if evaluation successful
|
Wed | 11:25 | Topic | J. Hrivnac: Event display
References | Slides: HTML, gzipped PS
Summary |
- Design dictated by requirement to be able to use any external
visualisation (histogram, event display, Ascii file, ...). Extensively
documented on the Graphics Web
- AHisto introduced as abstract scene, uses LHC++ and Objy. LHC++ to
be better integrated into SRT
- EventDisplay/Aravis: being migrated to standard graphics interface
(Rosemary and David Candlin)
- EventDisplay/Atlantis: Introducing new algorithms, menu structure
introduced. Will prepare manual, tutorial, implement XML parser
- EventDisplay/Wired: Summer student working
- Misc/XMLScene: Standard format for textual event representation for
Atlas Graphics, recommend to adapt it for Atlas General. Simple
implementation exists, file tree to be implemented, DTD to be
defined
- Misc/JavaScene: Interface to C++ exists, needs to be integrated
- Control: Prepared for code review. Need to be extended to satisfy more
requirements
- ObjectInterface: ObjectBrowser prototype exists, needs consolidation
with graphics
- Utility/CreatePlottableModel: Automatises creation of skeleton of
classes (a la MS wizards)
- CrossDomain:InDetGraphics, TruthEventGraphics exist for Si and TRT
(detector, digit), TruthTrack
- Application/GraphicsFromEvent: Web application to visualise any event
on any Atlas tape, some remaining technical problems
|
Wed | 11:40 | Topic | A. dell'Acqua, for M. Stavrianakou: Status of MC production
References | Slides: HTML, gzipped PS
Summary |
- Top studies: ttbar inclusive running on PDSF at LBL, ttbar for heavy
top: running on PCSF at CERN, many more to organise. Capacity available
on PCSF. Coordinating person required (optimally based at CERN) taking
the load off Maya
- Z to e+e-: In progress on PCSF (can no longer run on csf)
- B tagging events being run in Krakow
- Contributions from Glasgow and from Atlas WGS at CERN
- 17 GeV jets: filtering still to be done (volunteers needed)
- 35 GeV jets: Run successfully on PCSF (S. O'Neale)
- Higgs to gamma gamma: to be run on BASTA
- More specialised productions in the pipeline
- Single particles in non-uniform magnetic field done
- B physics going on in several places, confusion about muon geometry
- Production going on well, mainly due to good PCSF performance, but more
manpower needed
- Outside productions sometimes problematic due to different or missing
infrastructure
- csf essentially no longer being used
- Production meeting to be scheduled during Atlas week
Discussion |
- Q: What is the recommended exchange medium? A: DLT
|
Thu | 09:00 | Topic | S. Loken: Progress towards a National Collaboratory
References | Slides: HTML, PPT
Summary |
- DOE program towards collaborative experiments and computation
- Components: Toolkit of advanced computational testing and simulation,
collaboratory technology research and development, collaboratory pilot
projects, computational grand challenges
- Collaboratory R&D projects: Shared virtual reality, software
infrastructure, collaboration management, security infrastructure,
electronic notebooks, floor control, quality of service
- Videoconferencing: Work going on at LBL on MBone tools; new conference
controller enables remote control of conference tools and cameras;
other tools (NetMeeting, PictureTalk, Streaming JPEG - mostly platform
dependent)
- Floor management: Provide floor control and mediation for MBone
conferencing tools, plug into existing protocol support
- Information: DOE2000.lbl.gov/doe2k, www.microsoft.com/NetMeeting,
www.es.net, www.picturetalk.com
- Integration framework: distributed computing architecture with common
communication library
- Objectives: Facilitate development and interoperability of
collaboratory components (convenient access to unicast and multicast
messaging, common communication API, reliable multicast, CORBA
communication); see www.mcs.anl.gov/cif
- Security: Develop widely distributed applications and access rights
management; foster the development of a DOE laboratory public-key
infrastructure; integration into various application domains
- Quality of service: deploy differential services on selected ESnet
links, by implementing a bandwidth broker, linked to authentication
architecture
- Recent accomplishments: bandwidth broker with authentication,
demonstrated capability on ANL-LBNL link
- Electronic notebooks: Modifiable by many users, coupled to flat files
or (relational or OO) data bases. Interest: Mechanism for managing
distributed projects; more general than simple Web server, links with
cvs being developed; see www.epm.ornl.gov/enote/. Used in Diesel
combustion project
- Well received by participating companies
- More info: www.mcs.anl.gov/DOE2000
Discussion |
- Q: Are there copyright issues for distributed teaching? A: For the
most part, no. Servers being offered commercially, clients are free
- Q: What is the scientific template library? A: Very much oriented
towards specific problem, link facilities to be added later
|
Thu | 09:45 | Topic | D. Atkins, T. Finholt, J. Hardin: Collaboratory tools at University of Michigan
References | Slides presented by D. Atkins: HTML, PPT; slides presented by T. Finholt and J. Hardin: HTML, PPT
Summary |
- (D. Atkins) UM's School of Information founded three years ago
- Challenges: Communication and collaboration; information resources;
physical world
- Research projects: Collaboratories (UARC -> SPARC, bio-medical, HEP,
commercial product design); Digital library projects; research in
community settings; electronic commerce
- Collaboratory concept: collaboration at same/different time/space
- Organisations using it can be big and small, local and global, ...
- Examples: UARC facilities: supporting hundreds of instruments around
the world
- SPARC project: Real-time campaigns, Electronic workshops, Education,
Integrated models, Space weather testbed
- Interdisciplinary, human-centered approach
- International aspects: Collaboration with EU in the field of digital
libraries
- JSTOR: 25 major academic journals archived, see
www.jstor.org
- Research driven by applications
- Very interested in long-term coordination with Atlas collaboratory
effort
- (T. Finholt) Electronic resources represent excellent opportunities,
but gains aren't automatic
- Difference between hype, raw performance of technology, and real
performance ("reality gap")
- Need to bring together users and technology experts
- SPARC: Globally distributed instruments and scientists; concurrent
access to measurements of remote stations; virtual discussion room
- Past: limits in time and distance; today: diverse expertise, students
are in computer-mediated environment
- Past: focus on views from instruments (local data); today: simultaneous
views on instruments and models
- (J. Hardin) Custom-coded solutions, commercial off-the-shelf solutions
- Shared application (Habanero) built on top of Wired (3 weeks part time
modification)
- Shared application: Atlas MiniDAQ, control can be passed from one
instance to another
- NetMeeting does not require applications to be installed everywhere
(Habanero does)
- Examples: shared (via NetMeeting) work on PowerPoint file, FrameMaker
document
- UM School of Information involved with long-term research as well as
today's applications
Discussion |
- Q: What bandwidth do these tools need? A: 10 kB for the events in
Habanero, different profile than with NetMeeting
- Q: What is the effort involved in setting up a NetMeeting session? A:
Installation is straightforward for Windows machines, more complicated
for Unix machines. Dial-ups very fast
- Q: What about audioconferencing? More than two participants require
additional commercial products, but audio phone call is probably more
efficient
- Q: Can one, with NetMeeting, take over the screen of somebody? A: No.
Other products offer this possibility, though
- Q: To which extent can the virtual rooms be integrated? A: Focus is a
little different
- Q: Have you looked at Java based applications? A: Indeed very many
applications around
- Q: What about interoperability of the different tools? A: Need to be
looked at for every component (audio, video, ...), some tools only
implement subsets of standards
|
Thu | 11:15 | Topic | C. Onions: Videoconferencing at CERN
References | Slides: HTML, gzipped PS
Summary |
- Current status at CERN: 2 rooms equipped with Codec systems (1 public,
1 CMS); 2 rooms equipped with packet video (1 CMS, 1 Atlas); 1
auditorium with MBone
- Ongoing: Weekly software meeting transmitted via virtual room, control
group same thing, plenary meetings transmitted to MBone, person to
person discussions via VR. Subgroups also use CUSeeMe, NetMeeting,
Codec
- LCB videoconferencing project: aims to improve video conference
facilities at CERN; tasks identified:
Packet system, Codec-packet system gateway, Codec kits for PCs,
Codec MCU services, equip extra rooms, ISDN for packet video,
recording and playback
- Status of Atlas room: Installation now, early October: operational
demonstrator. Production services need identification of a person
to be the technical support; simplification of booking; preparation of
users' guide
Discussion |
- Q: What's the price of all this? What is required in outside institutes
to take part? A: Audio and slides are most important, telephone
conference better solution for the time being. Equipment for the room
is roughly 50 kCHF
- Comment: The whole report is on the Web; following it, a room for
10...15 people was equipped for 11 kUSD without problems
- Q: To what extent have sound and light experts been involved at CERN?
Found indispensable in other labs. A: Zero
- Q: Where does the asymmetry of CMS and Atlas come from? A: CMS has
got a dedicated person, and US-CMS has invested at CERN for VC
- Q: Is there a plan to extend the scope of the LCB project to other
collaborative tools? A: Probably a new project is more appropriate
|
Thu | 11:30 | Topic | RD Schaffer: Event domain, database, detector description
References | Slides: HTML, PPT
Summary |
- DB work concerns Offline and test beams; domains: Detector description,
event, alignment, calibration; talking about data base does not only
deal with Objectivity, but also the design of all persistent and
non-persistent interface classes
- Detector description: Single source for all applications, with
connections to CAD systems anticipated. Work started with an existing
data base for the muon system (Ascii file), initially feeding Fortran
commons and Zebra banks. Preserved functionality in moving to an OO
design
- Vision: Transient object model as central piece, connects to
Ascii files, persistent model, Fortran commons, application specific
models (reconstruction, simulation...). Use visitor pattern for
transformation
- Identifiers: Access to information via hierarchical string identifiers
according to logical detector structure. Natural means of mapping
detector-related information together (links naturally between event
raw data and detector description / calibration / alignment). Also
provide means of selecting information of interest. Status and plans:
with 0.0.8, MDT description is available (see README). Need volunteers
for other detectors. Also need to implement persistency for
DetectorDescription
- Informal cross-experiment collaboration begun on detector description
DB
- Event domain: Primary effort focused so far on design and
implementation of access to G3 raw data digits. Separation of transient
and persistent pieces a la Babar. Digits available for most of the
detectors, storage in Objy
- Basic design of raw data structure: Detector elements and digits are
identifiable, detector elements provide enough information to fully
decode their digits, digits are directly usable by application
programs
- User interface to raw data: organised hierarchically. Collector
objects return either DetectorElements or just Digits. Selections
possible
- Transient vs persistent: BaBar has chosen to isolate application
programmer from data base details (only see transient classes);
parallel persistent 'copy'. Question not yet fully addressed in Atlas
- Separation means that we can have multiple persistent stores (Zebra and
Objectivity). In the end, Level 3 should use the same interface to
raw data as Offline
- Status: Now beginning to attack the overall event design, with similar
ideas as BaBar. Questions: To which extent does the user see the
persistent classes? How do clients select/identify/navigate to
objects in events?
- Try and avoid schema evolution of basic event model
- Digits for pixels, SCT, TRT, and MDT are now available, TGC and RPC
will hopefully be done by Patrick Hendriks and Steve Goldfarb. Calo
remains to be done (most likely via lightweight pattern). Digits for
pixels, SCT, TRT can now be stored in Objy, can be exchanged with
Zebra. However, corresponding detector description not yet stored
in Objy
- Infrastructure: Two Objy servers available, 1 V4 with 100 GB, 1 V5 with
100 GB. Objy V5 still available for Solaris and NT only. Need to define
a working model for developers
- People needed: Plenty of room! Within existing efforts on
DetectorDescription and Event, new efforts to be started on
Calibration and Alignment
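The hierarchical string identifiers described above (access and selection following the logical detector structure) can be illustrated with a small sketch; the identifier strings and the `select` helper below are invented for illustration and do not reflect the actual ATLAS identifier classes.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Hypothetical store mapping a hierarchical identifier (e.g. a path
// through the logical detector structure) to some payload, here just an
// integer standing in for a digit index.
using IdStore = std::map<std::string, int>;

// Select all entries that lie below a given node of the hierarchy,
// i.e. whose identifier starts with the given prefix. This is the
// "means of selecting information of interest" the summary mentions.
std::vector<std::string> select(const IdStore& store, const std::string& prefix) {
    std::vector<std::string> result;
    for (const auto& kv : store)
        if (kv.first.compare(0, prefix.size(), prefix) == 0)
            result.push_back(kv.first);
    return result;
}
```

Because the identifiers mirror the detector hierarchy, the same keys can link event raw data to the corresponding detector description, calibration, and alignment information.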
Discussion |
- Q: Are the logical identifiers accepted Atlas-wide, or is it just a
proposal? A: Not yet discussed with all concerned subgroups
- Q: Isn't it foreseen to write Level 2 output into Objectivity? A: No,
that is not possible with current bandwidth, only possible after Level
3
- For debugging purposes, useful to be able to make any class persistent
- Q: When is this work ready to be reviewed? A: Will try for next DIG
meeting
- Q: If one can use Zebra and Objectivity interchangeably, can one
justify the choice of OO database? A: Sum of all requirements, which
are still considered valid, call for an OO data base
- For a review of the Event model, one must be prepared for several
iterations
|
Thu | 12:15 | Topic | J. Knobloch: 1 TB milestone
References | Slides: HTML, PPT
Summary |
- Fairly vague formulation, in principle already reached by RD45
- May workshop: Objy server, event store in Objy, tile testbeam,
HPSS interface under study. Our milestone not achievable with disks
only, but HPSS required. Need to define measures
- Proposal: Use tile test beam and GEANT3 data for milestone, need
10% disk and 90% tape via HPSS (more work to be done by RD45), provide
paper on benchmarks (space overhead, CPU consumption, R/W transfer
speed, assess added value of direct access and data structuring,
address question of physics analysis tools). Comparison with Zebra
files
|
Fri | 09:00 | Topic | K. Sliwa: LCB project "Data Management and Computing Using Distributed Architectures"
References | N/A
Summary |
- Monarc stands for Models of Networked Analysis at Regional Centres;
official project title: Data Management and Computing Using
Distributed Architectures
- Recently, Japanese collaborators joined
- Joint effort of Atlas and CMS
- History: first ideas in fall 1997, PAP preparation in spring and
summer 1998, meetings at CERN, Chicago, Lyon(?)
- Basic assumption: raw data and first processing at CERN, results made
available distributedly
- Central question to be addressed: Which models of distributed analysis
are feasible, according to network capacity and data-handling resources
likely to be available? Which parameters characterize these models?
- Main objectives: identify crucial parameters of models, and collect
information about them; develop simulation and modelling tools for
these models; determine the necessary infrastructure
- Phases: summer 1999: first-round set of modelling tools; fall
1999: refined set of modelling tools, guidelines for constructing
feasible computing models; 2001: prototype design and test
implementation of computing models
- Person-power available: more than 9.5 FTE so far. Request to CERN for
one full-time modelling person, and 140 kCHF
- Tasks: identify, run, and validate simulation tool; non-network
hardware, network hardware, OODBMS implementations, centre
architectures and resources; design analysis processes: user
requirements, analyse existing models, define metrics; implement
testbeds: define scope, tasks and configuration;
extensive documentation
- More info: the PAP (Project assignment plan), Soda (Christoph von
Praun's simulation tool), Monarc mailing list (use Mowgli to
subscribe)
- Main schedule: know where we are in summer 1999
Discussion |
- Q: How can we derive results for the 1999 computing status report?
A: Fits well with the aim of having first insights by summer 1999.
Resurrection of the computing model group could be useful
- Q: To which extent is there a danger that we just prove what we
want to prove? A: Setout of the project is targeted to objectively
explore the full phase space
- Q: In the 1999 status report, will we make a statement about regional
centres to be built in the US and in Japan? A: Perhaps not the
precise number, and not the relative sizes yet, but in general yes
- These parameters need to be known soon, in order to ask the funding
agencies for the resources required for regional centres
|
Fri
| 10:25
| Topic
| A. dell'Acqua: Report about units
|
References
| Slides: HTML,
PPT
|
Summary
|
- Geant4 has its own internal system of units, but the end user is free
to choose; hence a conversion system is needed
- Coherent system defined by set of four basic units. Chosen in G4:
millimeter, nanosecond, MeV, positron charge. Header files prepared
for conversions, abbreviations, ...
- Very expressive, self-documenting, and flexible
- Explicit specification of units, and the conversion, can be slow;
however, in well-written code the explicit use of constant parameters
is very limited
- Can choose the system of units which best suits the application
(e.g. MKSA)
|
Discussion
|
- Q: What about debugging when you just look at numbers? A: Then you
get the numbers in your internal units
- Potential disadvantages of different internal units of G4 and other
software: conversion effort, re-use of G4 components
- Andrea and Katsuya to come up with a proposal, and to send it to the
community
|
Fri
| 10:55
| Topic
| T. Sasaki: Status of Geant4
|
References
| Slides: HTML,
PPT
|
Summary
|
- Extensive
Documentation
on the Web
- Beta-01 released on July 23, 600 kLOC, 1.2 k classes
- Geometry and tracking optimised, physics processes to be optimised
later.
Physics processes extended with respect to alpha versions, persistence,
independent GUI and visualisation, fast parametrisation framework
included. Step interface under test
- GGE: Geant4 geometry editor, allows for a setup of the geometry without
C++
- DAVID (DAWN's Visual Intersection Debugger) helps find
inconsistencies in the geometry description
- Set of examples provided as well as introduction to G4, installation
guide, application developer's guide
- Release plans: end August beta-02, mid November beta-03, mid
December Geant 4.0
|
Discussion
|
- Q: Are the design documents available? A: Yes, on the Web
|
Fri
| 11:05
| Topic
| A. dell'Acqua: Organisation and work plan for Atlas-G4
working group
|
References
| Slides: HTML,
PPT
|
Summary
|
- What is a simulation program? Usually the user sees only the
description of his subdetector, but the general framework is more
important and more difficult
- Subdetector activity: detector geometry, detector response, hits and
digits, testbeam simulation
- Kinematics and generators: particle guns, interface to generators,
particle stacking, filtering, minimum bias overlaying, interaction
with event WG, database WG, physics WG
- Detector description: See RD's talk. Detector description database,
its interface to simulation, common tools for materials, volumes,
readout, common WG of LHC experiments, CAD-CAM interface
- Hits, digits, and persistency: Common tools for hit and digit
definition, efficiency, data organisation and volume, persistency
scheme. Interaction with event WG, database WG
- Physics: EM and HAD physics re-written and extended, new tracking,
new cuts, new geometry. Need serious plan for the evaluation of G4
physics; requirements needed. Long way to go to understand the MC
- Parametrized simulation: New field; fast MC, sub-detector
parametrisation requirements? Shower parametrisation and shower
libraries?
- General infrastructure: Arve, how to fit in component model, how to
make things co-exist
- Other activities: Field, graphics, background calculations, extensions
to G4
- Atlas-G4 core group: Propose to set up a small (less than 6) group of
people from Atlas G4 team, with OO experience, to come up with a
precise decomposition of the simulation part. Volunteers exist,
workshop in October
- Why categories? Because a correct factorisation of the problem, with
clear boundaries of responsibilities, facilitates distributed
development. We don't want a CERN-centric effort
- See the
Geant4
Web page. Compiled HP version of Geant4 beta01 available
on /afs/cern.ch/atlas/project/geant4/beta01. Atlas Geant4 mailing
list exists, can be opened and revived. Set up an Atlas G4 Web page?
More tutorials? WG from next software week on?
- People needed to evaluate the toolkit, to gain experience, to work on
current testbeams, to participate in analysis/design
- Contact Makoto or
Andrea if interested
|
Discussion
|
- Q: Is it true that Geant4 allows for mixing of detailed and fast
simulation? A: Yes. All volumes in a given envelope (which is a volume
itself) will run the fast simulation
- Q: What is the time scale for a subdetector implementation? A: Geometry
can probably be done within a couple of days by conversion of G3
geometry, but this is not what we want eventually
- Web page is a good idea
|
Sat
| 09:00
| Topic
| J. Knobloch: Operating systems
|
References
| Slides: HTML,
PPT
|
Summary
|
- Systems currently supported for Atlas software: HP-UX, AIX, DUX;
Linux upcoming; supported outside CERN: Irix, Solaris(?)
- Decision taken already: in future, will support both NT and Linux on
PCs, plus one additional commercial Unix operating system. No preference
for HP-UX
- Criteria: availability of commercial software, commonality with other
experiments and projects, price of hardware, outlook for the future,
quality of compilers and development environment, support within Atlas
- HP-UX: we had it. DUX: what will Compaq do? Good compiler. AIX: little
response in Atlas, good environment. IRIX: very little Atlas response.
Solaris: wide spread use, Objectivity early release, CMS uses it
- Matrix of OS shows some advantages of Solaris
- Conclusions: nobody advocates HP, about equal interest in Solaris and
DUX, limited interest in AIX, IRIX is a minority issue
- Can we propose to go for Solaris? Do we need a decision by the next
Atlas week? (COCOTIME decision for 1999, who will win on Merced; any
time will seem the wrong time for a decision). A transition strategy
needs to be established
|
Discussion
|
- Another potential plus of Solaris is its existence on Intel
- Have to consider 64 bit compliance
- Q: What's the function of the server? A: Reference platform for
program development, login service for people without other access
to this O/S
- How to proceed? Needs to involve the whole collaboration (talks at
forthcoming Atlas week). Collaboration will be informed via
computing coordinator report
- Q: Currently, no reference machine available to us at CERN. A: Yes,
but a Solaris WGS would be one of the first steps
|
Sat
| 09:40
| Topic
| H. Meinhard: Workshop summary and decisions
|
References
| Slides: HTML,
PPT
|
Summary
|
- Summary of workshop and DIG presentations and decisions as recorded
in these minutes
|
Sat
| 10:25
| Topic
| J. Knobloch: Planning until July 1st 1998
|
References
| Slides: HTML,
PPT
|
Summary
|
- General: Install test and verification procedure by 4/99 (M.
Stavrianakou); Assess development resources by 4/99 (J. Knobloch)
- Analysis: collect analysis requirements by 11/98 (D. Froidevaux, K.
Sliwa); evaluate analysis environment for interim period by 3/99
- Infrastructure: 1 TB database prototype by 12/98 (RD Schaffer);
Raw event data structure
- Simulation: domains defined by 10/98 (A. dell'Acqua, M. Asai);
Geant4 test beam simulation by 3/99; define event generator interface
by 1/99; simulation framework available by 1/99; single particles from
Geant4 in Arve by 1/99 (A. dell'Acqua)
- Reconstruction: geometry available by 3/99; prepare data by 12/98;
find ID tracks by 2/99 (J. Pater, A. Poppleton); find muon tracks by
4/99 (G. Stavropoulos); find calorimeter clusters by 5/99; calibration
of calorimeter by 4/99
- Event display prototype by 7/99 (J. Hrivnac)
- New milestones: Provide outline of 99 status report by 2/99 (J.
Knobloch); report from review of computing by 2/99; data challenge by
7/03
|
Discussion
|
- Extremely important to have a good event display working for PR
purposes
- Data challenge ought to be coupled to full system test (now targeted
at 2004), smaller data challenges at earlier points in time could be
useful
|
Sat
| 10:40
| Topic
| H. Meinhard: Dates of meetings in 1999
|
Summary
|
- Atlas weeks likely to be in weeks 8 (22-26 Feb), 23 (7-11 June),
37 (13-17 September)
- Software workshops: weeks 11 (15-19 Mar), 20 (17-21 May),
35 (30 August - 3 September), 48 (29 November - 3 December)
|
Sat
| 10:55
| Topic
| J. Knobloch: Closing remarks
|
References
| Slides: HTML,
PPT
|
Summary
|
- Local organisation: very well done. Thanks to Homer, Krzysztof, Beth,
Connie
- Thanks to speakers, program organisers, ...
|
Discussion
|
- H. Neal: Pleasure to host the workshop
|