ATLAS Software Workshop Minutes

CERN, August 30 - September 03, 1999

Mon 10:00 Topic J. Deacon: Tutorial on Unified Modelling Language
References Recording (see instructions)
Summary
  • Methods known previously: Booch, Coad-Yourdon, Fusion, Martin-Odell, OMT, ...; most popular methods and notations were OMT, Booch. OMT more appealing to business communities, Booch to scientific ones
  • Booch trying to unify methods and notations
  • The Unified Method, close to OMT, appeared as version 0.8 in October 1995. Jacobson joined in autumn 1995, bringing a strong OO background and introducing use cases and object interaction modelling. The method was subsequently renamed the Unified Modelling Language (UML)
  • OMG issued a Request for Proposal in 1996 for UML
  • Versions 1.2 and very recently 1.3 ready for ratification by OMG
  • Goals: expressive visual modelling language, enable modelling of systems using OO concepts with a limited number of graphical symbols, independent of implementation programming language, rigorously described by metamodel
  • Composition and organisation: Use case diagram, class diagram (although care must be taken), statechart and activity diagrams (not for beginners), interaction diagram (sequence or collaboration), implementation diagrams (component, deployment)
  • Diagrams and icons: few basic icons with variations if necessary. Own entities can be added
  • Use case diagram: representing the functionality of a system, or a class, as manifested to external interactors with the system (Ivar Jacobson). Usually more important for the former. Good number: 6...36. Mainly to be used in the requirements-capture phase; some people think they can be used in the analysis or design phase as well. Diagram format not necessarily better than text format. Relationships in Use Case Diagrams now (version 1.3) represented differently
  • "Class" diagram: static structural model of the kinds of objects that comprise the subject matter, intended to comprise the design. Can show also attributes and relationships, operations, methods. Can represent packaging/grouping as well. Concrete, implementation, object class example shows variables, member functions (public, protected, private) as well as relationships (associations, aggregations etc). Association relationships show roles, multiplicity, navigability. Important for aggregation: lifetime of instances (not classes). Generalisation symbol used in analysis and design, danger for confusion. Other things that can be depicted: instances, dependencies, interfaces, metaclasses, ...
  • State chart diagram: depicts values of intrinsic properties, rarely carried through to the end. Casting into a hierarchy usually necessary because of the multiplicity of states in typical projects. Might be attached to a concrete object class, a type, or an entity. Depicted by soft boxes, including allowed paths from one state to another
  • Activity diagram: mixture of state charts, Petri nets and flow charts, focuses on procedures, not very OO. Interaction diagram normally preferred, but activity diagram well suited for visualising complex, distributed algorithms. Not currently supported by CASE tools
  • Sequence diagram: emphasis on responsibility and collaboration in time; provides a decision symbol as well - danger of sliding back into flow charts. Most useful for key examples of instances. Can include return receivers, asynchronous messages, threads, object creation and destruction, real-time constraints, ...
  • Collaboration diagram: shows structure of instances sending messages, no axes on paper, hence messages need to be numbered
  • Component diagram: essentially equivalent to visual representation of makefiles
  • Deployment diagram: run-time deployment of objects, components, nodes from a physical point of view
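
As a hedged illustration (hypothetical classes, not from the tutorial), the main class-diagram relationships map onto C++ roughly as follows: generalisation becomes public inheritance from an abstract base, an association with navigability becomes a pointer or pointer collection, and composition binds the part's lifetime to the whole:

    #include <vector>

    class Track;                          // forward declaration for the association

    // Generalisation: the abstract base defines the type ...
    class Detector {
    public:
        virtual ~Detector() {}
        virtual double resolution() const = 0;   // subclasses specialise
    };

    // ... and a concrete subclass implements it
    class PixelDetector : public Detector {
    public:
        double resolution() const { return 0.012; }
    };

    // Association, navigable Vertex -> Track, multiplicity 0..*:
    // the Vertex refers to Tracks but does not manage their lifetime
    class Vertex {
        std::vector<Track*> tracks_;      // role: 'fitted from'
    public:
        void addTrack(Track* t) { tracks_.push_back(t); }
    };

    // Composition (aggregation with bound lifetime of instances):
    // the Vertex is created and destroyed together with the Event
    class Event {
        Vertex primaryVertex_;
    };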
Discussion
  • Q: What's the difference between 'class oriented' and 'object oriented'? A: Not all languages have just classes (eg. Java also has interfaces); also, in C++, there are concrete and abstract classes. Thinking in terms of objects (instances) is important
  • Q: Isn't it useful to use the additional dimension that a diagram offers over the linear text flow? A: In some specific cases, yes - but in most cases Use Cases are just linear, hence text matches well
  • Q: What sort of scalability support was added to UML by OMG? A: Essentially, these are promises for later releases, for example for packaging of large-scale projects, version control, configuration management
  • Q: Is there anything new in the method? Which diagrams should actually be used? A: Books on specifications, user guide, process; not very exciting, not much new as compared to earlier books by Jacobson. Of course, some dependency on the problem
Tue 09:00 Topic N. McCubbin: Introduction, workshop agenda
References N/A
Summary
  • Warm welcome to all attendees, in particular those attending for the first time
  • Meetings on Monday and Friday: Intended to establish contacts with system software coordinators and task leaders, overall coordinators etc.
  • Farewell drink for Jürgen Knobloch: Wed 16:30 h, 40-R-D10
  • Clash between reconstruction and graphics working group resolved
  • Discussion on Friday about dates and venues of next year's workshop, should also address structure etc.
Tue 09:10 Topic N. McCubbin: Organisation of Atlas Computing
References Slides: HTML, PS, PDF, TIFF
Summary
  • Set-up of organisation: names of system coordinators and task leaders, chairs. Two-dimensional matrix organised along systems, and along tasks
  • Still some gaps in the picture: eg. graphics
  • Not shown: architecture task force (extra session on Wednesday), limited in time
  • Computing Steering Group: will meet next week for the first time. Executive body of Atlas computing, will make high-level decisions and report to EB. Of course, common sense will apply...
  • Computing Oversight Board: Peter Jenni, Torsten Åkesson, Fabiola Gianotti, Norman McCubbin. Really meant to be an oversight body, should not normally make any executive decision without consultation with CSG or other bodies. Initially, some decisions have been taken by COB, but this should be taken over by the Computing Coordinator and/or the CSG with time. Appointments will be discussed in COB as well
  • National Board: Nominations now in place, first meeting in two weeks' time, will elect its chair person. Will discuss topics such as tape charging, reduction of platform support at CERN, of course the Regional Centres question (extra subcommittee of National Board)
  • (Low-level) technical issues now not properly covered in present organigram; some well defined mechanism required. Perhaps some issues should be dealt with by ad-hoc groups, triggering could be done in the weekly software meeting. Recommendations highly welcome
  • Architecture after ATF: ATF will not have finished all the architectural work by end October, hence something has to follow up. Clear that a group of people will be formed, but composition, name, mandate etc. not yet entirely clear
  • Planning is a very important step in defining our way to deliver the software systems for Atlas - in that respect, it must be treated like any other system. Planning means goals, timescales, resources, milestones, ... Clearly, there will be dependencies that will require an iterative procedure. Needed for ourselves, but also for discussions with the rest of Atlas (eg. on resources) - will not be sufficient, but is necessary in any case. Also, there are external forces: The LHCC has asked for a status report as an update of the Computing Technical Proposal, the details of which still need to be discussed with the referees. Also, there will be a review of LHC computing driven by the CERN management (Hans Hoffmann, Roger Cashmore); scope not entirely clear, but it will assess state of readiness of both the experiments and CERN's IT division; the aim is to be able to plan computing resources, and to get to MOUs
  • It is our aim to push for as much commonality as possible between the various upcoming reviews
Discussion
  • Q: Recently, important decisions have been taken in COB. Is that going to be maintained? A: No, see above. T. Åkesson supporting this point of view
  • Q: Does the COB have sort of a veto right against decisions of the CSG, as is the case for the spokesperson in other areas? A: It is hoped that this will never be an actual problem
  • The whole area of the core software is essentially missing from the organigram, as are tools, support, running jobs, ...
  • Q: Will the ATF follow-up again be secret? In many cases, ATF has discussed work without the people who did the work. A: ATF was never intended to be secret, but sometimes non-open meetings are needed. As much openness as possible will be the aim for the successor body
  • ATF successor could not only implement the architecture, but should also make sure the implementation of the systems is compliant with the architecture, even down to very low-level technical questions about the physical implementation
  • Q: Could you comment on how you see the role of the outside groups? A: The necessity of planning also comes from the aim to break the project down into work packages. The planning must not at all be restricted to work to be done at CERN
Tue 09:55 Topic F. Gianotti: Software requirements of physics groups
References Slides: HTML, PS, PDF, PPT
Summary
  • Apologies for not attending the workshop for the rest of the week (CERN school of physics)
  • Physics groups: detector performance, combined performance, physics
  • Physics requirements: varying broadly across the groups, but there are three basic requirements
  • 1. Use past experience. There are 10 years of experience with the detector software around. This requires that some transition from Fortran to C++ (eg. reverse engineering of existing Fortran) be done
  • 2. Performance: New software should provide expected detector and physics performance, the reference being the physics performance TDR, test beams etc.
  • 3. Simplicity and functionality: Aim is to do physics, not software development per se. Need to take into account that essentially all 'end users' are also developers. Each member of Atlas needs to be able to run analysis without support of a software engineer. Every 'end user' needs to invest the effort to learn
  • Contribution of physics/performance groups to software: MC generators (working group chaired by Ian Hinchliffe): will be many of them, must fit the overall architecture, transparent use of all generators, support for multilanguage implementations. We need to give input to the authors (same classes, define common output structures)
  • Detector simulation: Today two lines (full simulation based on Geant, fast simulation). In future, an intermediate step will be needed; possible ways: improved Atlfast, or a simplified Geant based on shower parametrisation. Will be necessary to overlap a Geant produced event with a real data event. COB: Atlfast++ should be used without further development; a new version not embedded into Root should be developed
  • Geant4 simulation: need extensive comparisons with test beams for example on shower shapes; Geant3 was good on electrons, but reproduced pions poorly. Hence, it is important that we have a 'module 0' simulation of all detectors in Geant4. Necessary to have another independent hadronic shower package - FLUKA. An interface of Fluka and Geant4 is being written
  • Reconstruction: physics and performance groups should set the requirements, and verify the code; should define the classes (eg. tracks) and operations, taking the combined N-tuple into account. Groups should contribute to reverse engineering of Fortran code, and to migration to OO/C++
  • Interface between Geant3 data (100 GB prepared for physics TDR) and new software absolutely necessary
  • Graphics and event display: Input from physics and performance groups absolutely necessary, lack of communication in the past. Only one package (PERSINT) was available for the physics TDR
  • Analysis tools: Software must be independent of data base vendor, analysis and visualisation package etc. Could go on with PAW for a little while, but a modern tool is attractive to learn new software techniques. Root (more specifically, the 'PAW' part of Root) is ready to be used and looks like an attractive choice. Aim is to take a decision by the forthcoming CSG. In parallel, evaluations of existing packages should go on, but it seems that no other product is really ready to be used
  • Other conceivable contributions: event definition, detector description, understand trigger/event filter requirements, elaborate calibration and alignment, understand event preselection, ...
  • Software effort must be driven by physics goals; main requirements: use past experience, performance evaluation, simplicity and ease of use. Each piece of software must be used as intensively as possible
Discussion
  • There is a Root independent version of Atlfast++ which can be used with LHC++, actually being used by Monarc
  • The relationship between Atlfast++ and Geant4 needs to be understood; Geant4 provides for parametrisations
  • We should make sure there is only one source of detector description, even if Atlfast++ uses very little of this information
  • Work going on in LHC++ to define abstract interfaces for each bit of functionality we are interested in, including the user level
  • The concept of the 'PAW' part of Root is broken - you have to take all of Root, or nothing at all. Root requires that all histogram/N-tuple input be converted into Root format
  • Root could be used in a sort of restrictive way, only permitting N-tuple input
  • Q: Should the transition from Fortran to OO just take the design of the Fortran code, or wrap existing code into C++? A: The possibility for wrapping should exist, but may not be feasible for all packages. N. McCubbin: Emphasis should be on understanding the design and the basic ideas of the Fortran code, not on a line-by-line translation
  • (J. Hrivnac) Disagree with the statement that Persint is the only package which could be used for the TDR. A: That was at least the status as of the time of the physics TDR; no other known package implements, at this point in time, all Atlas systems. The power of an event display for debugging the reconstruction must not be underestimated.
  • Q: Are there any common efforts for generator interfaces? A: Yes, there is a workshop planned in 2000 or 2001
  • Maintenance of Fortran based programs is difficult, since there is little attention by the compiler providers. Also, care must be taken about the phasing out of support for the Fortran version of CERNLIB
  • Q: Are there any plans to setup a scheme for the comparison of physics results between Atlas and CMS? A: No, not for the time being
  • Request for evaluation of LHC++ and Root has been outstanding for years
  • Q: Confusion on what the intention is about the preliminary framework, and wrapping existing Fortran into it. A: That can be pursued, but may not be absolutely necessary because the TDR suite will be kept running. Code freezes may not be entirely feasible on the Fortran software - changes on the geometry may be required in the simulation and reconstruction, but should be restricted to absolutely necessary cases
  • Some failure of communication, and misunderstanding, about the term 'reverse engineering'
  • Persint has actually been running on more platforms. In order to ensure long-term maintainability of Fortran code, one should allow for Fortran 90 to be used
  • Allowing for all Fortran 77 to be wrapped may imply an undue amount of work. A: It would be very important to be able to compare immediately the results of Fortran and C++ algorithms
Tue 11:20 Topic D. Barberis: Quality control group
References Slides: HTML, PS, PDF
Summary
  • Two meetings held so far
  • Mandate: evolve software quality control requirements, minimising the overhead, in particular in the beginning
  • Composition: Makoto Asai, Dario Barberis, Martine Bosman, Bob Jones, Jean-Francois Laporte, Maya Stavrianakou
  • Long-term targets: good design (inspections, reviews), good code (testing procedures), good documentation
  • Relax some requirements on coding rules and design documentation for non-core software for the transition period
  • Basic idea: software following an onion model (kernel software used by everybody, outer layers are user group specific), higher demands on software quality of kernel software
  • Coding rules: take Spider rules as a starting point, classify them by importance, look for examples of good and bad code illustrating all rules (an illustration of such a pairing follows this list). Applicability to be defined as a function of importance of the rule, and 'centrality' of the software. Exceptions possible, but need to be well justified and documented (eg. interfaces to external packages)
  • Software process: long term, ask for design document, user guide, reference manual. Short term, for non-core and existing software, ask for user guide at least, describing the interfaces of the package. Validation by inspections, walk-throughs, reviews, tests
  • Timescales: paper on policy of quality control by October, coding rules by beginning of November, software process by end of the year
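
By way of illustration only (this is an invented example, not one of the Spider rules), a rule with a good/bad code pairing might look like this:

    // Rule (hypothetical): initialise data members in the constructor
    // initialiser list, not by assignment in the body
    class Cell {
        int    id_;
        double energy_;
    public:
        // Bad: members are default-initialised first and then assigned,
        // and nothing stops one of them from being forgotten
        // Cell(int id) { id_ = id; energy_ = 0.0; }

        // Good: members initialised once, in declaration order
        Cell(int id) : id_(id), energy_(0.0) {}
    };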
Discussion
  • Q: Relaxing coding rules for Fortran was a big mistake. Strict guidance on C++ is much better. A: The group's mandate is explicitly different
  • The key point is that the support is there to tell people whether they have done their stuff correctly
  • There are significant problems with the code checking tool right now; a market survey is underway
  • All pieces of software used for physics results should be given the same amount of attention
  • Q: Where is the boundary between short-term and long-term? A: Open for discussion in the community
  • Q: What about design rules? They are even more important. A: This will require strong interactions with the ATF
  • Q: There is lots of experience out there with C++, should profit from that for our coding rules. A: This has been done by the Spider project
  • Q: Will the current Fortran software be excluded from these requirements? A: Yes
  • There will be a need for an ongoing activity on quality control beyond the mandate of the working group. Recommendations should be made by the QC group
  • Criticism of onion model in terms of simplicity, and in terms of migration of packages
  • Quality assurance must be understood as an assistance to users, not as control or as an additional burden
  • Testing plan will emerge from the QC group
  • Q: What about recommendations about libraries to be used etc.? A: Not for the QC group to provide recommendations, it has been brought up in the ATF
  • Q: Experience has shown that there is a strong correlation between metrics on Fortran code, and the number of corrections to the code necessary. Are metrics being considered by QC group? A: This depends on the tool support
Tue 12:00 Topic C. Onions: Training
References Slides: HTML, PS, PDF, PPT
Summary
  • Focus is on OO/C++ training currently
  • Training contact people: Names given by a number of countries, but little interaction; some important countries missing
  • Courses for coordinators/task leaders (hands-on analysis, design and programming with C++): one in June, another being organised for 20...24 September. Another one (weeks of November 22 or December 6)?
  • Other courses: in US, given by ObjectMentor, Geant4 courses very popular; follow-up of introductory C++ OOAD course (use of STL, C++ pitfalls, workshop on design patterns)
  • Consultancy volunteers: Wim Lavrijsen, David Malon, RD Schaffer, Lassi Tuura. Use atlas-sw-developers@atlas-lb.cern.ch
  • UCO in building 40: now open 10:00-12:00 on Monday, Wednesday, and Friday, including the book shop
  • Training web pages: updated weekly, accessible from Atlas home page. Recent additions: C++ syntax guide, C++ Standard Library references, OO glossary, links to commercial Web-based training
  • Other activities: preparation of C++ training guidelines, 1st draft on the Web; training needs of Atlas and CMS discussed with consultant, preliminary report on the Web; discussion on C++ tutorial series by IT division (similar to existing Java series)
  • Next goals: make Web-based training available via CERN technical training (some problems with platform independence); identify good physics examples; design and code walkthroughs to be organised (taking iPatRec as an example); finalise C++ training recommendations; complete training contacts list; follow-up courses; compile HEP glossary; small stock of training material (budget?)
Discussion
  • Q: Would be nice to include hints on debuggers, profiling tools, emacs modes etc. on the Web page. A: Perhaps this should better be put under tools
  • Q: Some activity indeed going on in outside labs, eg. Paul Kunz giving his C++ course in Munich. A: Please keep the training coordinator informed
Wed 09:00 Topic S. Haywood: Architecture task force: General report
References Slides: HTML, PS, PDF
Summary
  • Membership (see slides), mandate: specify global architecture allowing for partitioning of the software effort
  • Our understanding: LHCb (Gaudi), which documents its architecture very well, provides a model for our way of working. An architecture is a design description on a piece of paper; a framework is the implementation of core features (infrastructure) in code
  • Community eager to hear suggestions and decisions, partly touching on responsibilities of the Quality Control group
  • Report to be given by end October, containing the outline of the proposed architecture. Framework implementation to be taken over by another group after the end of the ATF mandate. Design of domains can partly start independently from that
  • Examples from Gaudi document: data flow diagram of the main tasks, building blocks (correspondence to usual terms), software structure (layers: basic libraries, frameworks and toolkits, applications)
  • Four meetings so far: mandate, aims, time scales; first thoughts; input from DIG working group; Gaudi; use cases; standard libraries; prototyping; event filter and DAQ; CDF, D0 and BaBar architecture and event model; graphics; object networks; Atlas event model, detector description; comparison of Gaudi with AC++ (CDF/BaBar); work plan
  • Observations: Separation of data and algorithms; separation of transient and persistent data; importance of event data model; use cases important to understand community's needs, stimulate design process and test it; event filter needs to be considered; possibility of migration to a different implementation language; scripting languages; need for a working group with a Chief Architect; ...
  • Work plan: interaction with users, collect use cases; pursue OOAD; compare our design with other groups' ones
  • Control (application manager, steering) is critical area. LBL group very interested, will do prototyping and comparisons of different approaches
  • Output so far: Proposals for standard libraries; prototyping for reconstruction/simulation (Chaos, Paso) - clear that this is a very temporary solution
  • Report: Workings of ATF, architecture proposal (decisions, alternatives); associated guidelines (libraries, tools); scene for identification of work packages. Need to provide a 'vision' which all Atlas can embrace, needs to contain substance in order to provide solid foundation
  • Feedback from the community is very valuable; keep in mind that ATF is focusing on the infrastructure
  • Check the Web site of the ATF
Discussion
  • Q: What if ATF's conclusions are similar to what other groups have done? Should we take over major parts of the code? A: Yes, in the extreme case, that can be envisaged
  • Q: Could you comment on the possible migration to Java? A: We should be open for a complete change as well as for a partial use of Java in our system
  • Q: What scripting language should people learn? A: ATF has not yet addressed this point
  • Q: In the past, DIG was supposed to define the interfaces between domains; ATF does not seem to address this. People need a definition of the interfaces now. A: ATF is concentrating on the core part of the software; this is likely to be pursued by the architecture working group following up from ATF. It is however acknowledged that this is a key question, in particular the definition of how information is passed between modules
  • Q: Is a real collaboration with another experiment on a framework conceivable? A: There is no general policy which precludes that; we have to see how existing designs match what we want
  • At the moment, the 'Chief Architect' is generally considered a good idea, but there are no concrete thoughts about it yet, and no person has been pinpointed
  • Separation of data and algorithms, and the definition of how to pass information are key issues
  • Framework should not be our prime occupation; provided there is significant overlap with other groups' designs, we should seriously consider a collaboration rather than go ahead with our own implementation
  • Q: How does it formally work - is the report a decision or a proposal for a decision? If the latter, how is a decision going to be taken? A: Hope that discussions are already taken into account to some extent in the final report; clearly, the working group will need the freedom to slightly change it if needed, but the report is hoped to be a firm recommendation and basis for future work. Formally, the report goes back to the Executive Board, probably via the CSG. There should probably be a draft document open for public comments for a little while
  • Q: Has D0 been considered? It may be useful to understand the differences with respect to CDF. A: No, we have not looked much
Wed 09:50 Topic K. Amako: Use cases and scenarios
References Slides: HTML, PS, PDF, PPT
Summary
  • Very sketchy, partly representing the speaker's personal view only
  • Mail to system software coordinators asking for use cases ("What do you want to be able to do?"); replies received from calorimeter, inner detector, muons, jet reconstruction
  • Use case describes a way the user uses the software system; captures functionality of the software system as seen by users; sort of a way to document the requirements. Consists of use case diagrams and statements; model described in plain English with simple diagram notation
  • Use-case driven: use use-cases as primary artefact to design and implement software system, promotes the separation between user and developer
  • Need to clearly define users and developers; physicists are sometimes users, sometimes developers
  • Observation: replies of systems talk about use-cases of almost all activities in Atlas data processing and analysis in offline and online, hence domain area is huge. Domain decomposition required; activities in domains are not independent from each other due to common tools, data presentation, detector description, data storage and retrieval procedure, ... Some of these commonalities must be unique, for others there may be different implementations
  • Propose to call the lower layers of the Atlas software structure the common framework
  • Strategy for deriving a design: need to define the actors (=users) to arrive at a use case model of the Atlas framework. To that end, serious interactions with designers of system and physics software required to arrive at candidate use cases. Use case model is essentially equivalent to a requirements document. Proposed steps: requirement document, object-oriented analysis, high-level class/object models and interaction/collaboration diagrams, definition of packages, signatures of major methods of high level classes, ... All this needs to be an iterative process
  • Establishing a software development process is essential for a software system as huge and complex as the Atlas one. Very good experience was made in Geant4. Most developers, however, would not need to know about it
  • Goal: at end of ATF, analysis model of Atlas framework, design documents of use-case model and analysis model in plain English and simple (non-UML) diagrams; being worked on now by a few people nominated by ATF
  • When can we see real code? Iterative procedure implies an early implementation
Discussion
  • Q: Is the question of a graphical user interface taken into account? A: Yes, but that is not a major issue
  • Q: Calling for use cases is considered the right way, opens the path for democratic development. Feedback on the use cases is highly desired by those who have delivered them. A: The next round of use case collection should only be done after face-to-face interactions
  • Q: Experience: Use-cases hopeless for design proper, but useful to check the design. A: This is true in terms of the tools, not in terms of the concepts
  • The Object Networks look very much like data flows, not particularly object-oriented
  • Q: Not knowing about the software process is very dangerous for the developers. A: The software process referred to here is the methodology, not the quality control
  • Q: How many use cases do you expect to collect finally? A: Will grow with time, will be published shortly
Wed 11:05 Topic D. Candlin: PASO
References Slides: HTML, PS, PDF, PPT
Summary
  • Framework: you do what you are told, it's a black box; skeleton: you do what you like, but skeleton could help you
  • Paso: one-day adaptation of getGraphicsEvent; light skeleton with traditional execution path. Code to be plugged in in three places: initialisation, event handling, termination
  • Provides data access (RD's Event package allowing for access to Geant3 produced digits), and visualisation (Julius' interface to the Atlas graphics packages)
  • Some re-arrangements and comments in the code, user guide (see Computing -> Applications)
  • Hits are still missing
  • TRT_Rec being moved to Paso, shaking the bugs out of the latter, iPatRec and xKalman++ being migrated. Most importantly: user modules
  • No scripting, no GUI, simple facility for parameters (a la FFREAD) to come soon
  • How to use it: check out Paso, add MyModule.h and MyModule.cxx to the Paso subdirectory, edit paso.cxx to put in a call to the user constructor, run gmake (a hedged sketch of such a module follows this list)
  • Linking takes very long for the time being, far too much stuff is linked in. Solutions being pursued: dummies, shared libraries
  • Data Ids and ranges: organised using hierarchical string identifiers
  • Graphics: using standard Atlas packages, any 'plottable' can be displayed on any 'scene' - event displays as well as histograms etc. Each plottable/scene combination requires a rep
  • Paso currently running on HP and Linux
  • Demonstration given, geometry problem found
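
A minimal sketch of such a user module, assuming invented hook names (the actual Paso interface may differ); the three plug-in places match the initialisation / event handling / termination path described above:

    // MyModule.h -- hypothetical Paso user module
    #ifndef MYMODULE_H
    #define MYMODULE_H

    class MyModule {
    public:
        MyModule();            // constructor called from paso.cxx
        void initialise();     // called once before the event loop
        void handleEvent();    // called for every event
        void terminate();      // called once after the event loop
    };

    #endif

After adding MyModule.h and MyModule.cxx to the Paso subdirectory and putting the constructor call into paso.cxx, gmake rebuilds the executable.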
Discussion
  • Q: It would be useful if the user documentation contained the commands to be used for SRT. A: It does
  • Q: It would be useful if Paso had access to Zebra banks with reconstructed quantities. A: Yes, but the current event is not designed such that anything can be put in and anything can be read out. Some design work would be needed fitting it into the event data model
  • Creation of text, VRML, XML, histograms etc. possible as well
  • Q: Are there plans to get data from Geant4? A: As far as Paso is concerned, the standard event interfaces are used. Paso does not care where the data come from
  • Q: What is the possible gain from going to shared libraries? A: No idea. Good experience in Staf, though
Wed 11:35 Topic C. Tull: Comments on control/framework
References Slides: HTML, PS, PDF, PPT
Summary
  • Group of people at LBL interested in these issues
  • Challenges faced due to CPU requirements, software complexity, distributed development, ...
  • Large data volumes, globally distributed collaboration, long-lived project, large complex analyses, distributed heterogeneous system, commercial software and standards, evolving technology, OO, legacy software and programmers, limited manpower, most of which are not CS professionals
  • Control: part of the infrastructure which ensures the right piece of software runs at the right time, taking the right input and putting it to the right place
  • Framework: definition from ATF
  • Many solutions on the market, little distinction is made as to their functionality
  • Concepts: Finite state machines, action on demand, stream/record/frame, simulated data flow, object network, software bus, mobile agents, C++ interpreter
  • Still some degree of commonality around: Component design, data distinct from algorithms, physical design considerations, data flow part of algorithm interface, observer model for algorithms
  • Other design principles: code generation tools, dynamic loading, code -> script -> GUI
  • SWIG: Number of candidates around, IDL attractive because of its simplicity
  • Control state: every component has state methods which communicate with a global state manager (see the sketch after this list)
  • XKalman run within Staf, access to Zebra banks via IDL
  • Design and prototyping: requirements documents, use case scenarios and behaviours, market survey (framework/control architectures, software/hardware technologies, other software projects, leverage manpower and expertise)
  • Common HEP behaviour patterns: reconstruction, data mining, data prospecting
  • Control framework development must be heavily front loaded - framework must exist before all the rest...
  • Guess about expectations: physics TDR software runnable and usable, SW tools for integration, functional user interface for non-distributed application, natural interface to analysis tools (note the plural!), working interface to data
  • Technical decisions: Take the best of all worlds
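
A rough sketch of the 'control state' idea mentioned above, with all names invented: every component exposes state methods, and a global state manager drives all components through a common life cycle:

    #include <string>
    #include <vector>

    class StateManager;

    // Component interface: state methods communicate with the manager
    class Component {
    public:
        virtual ~Component() {}
        virtual void initialise(StateManager& mgr) = 0;
        virtual void run(StateManager& mgr) = 0;
        virtual void shutdown(StateManager& mgr) = 0;
    };

    // Global state manager: holds the current state and drives all
    // registered components through the transitions
    class StateManager {
        std::vector<Component*> components_;
        std::string state_;
    public:
        StateManager() : state_("idle") {}
        void attach(Component* c) { components_.push_back(c); }
        void initialiseAll() {
            state_ = "initialised";
            for (unsigned i = 0; i < components_.size(); ++i)
                components_[i]->initialise(*this);
        }
        const std::string& state() const { return state_; }
    };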
Discussion
  • Q: Is the IDL only for the data description, not for the methods? A: Yes
  • Q: What is meant by software tools for integration? A: A full bunch of stuff, see the detailed slides that were not discussed at length during the talk
  • Q: Concerning Staf, modularity is not strictly respected - there are some 10'000 lines of Kuip outside the modules. A: Indeed, modules controlling other modules (supermodules) not yet implemented. This goes back to the question of how communication between modules should be done (large temporary event store vs dedicated small data objects)
  • Q: What exactly are you proposing IDL for? This should all depend on fundamental architecture choices. A: Using IDL opens the road for later migration to a real ORB
  • Q: We are frightened by that - basically it sounds like the advertisements for Age. A: These are really different things - IDL, Corba etc. are industry standard, widely used and fully supported by commercial and free products
  • Object definition in IDL would be very useful for Graphics, as much of the interface work could be automated
  • LBL expertise could be very helpful for the definition of the abstract interfaces aimed for by LHC++
Wed 14:10 Topic J. Deacon: The strengths, weaknesses, opportunities, and threats of object technology
References Recording (see instructions)
Summary
  • Why change the way we write software? Writing software is difficult, but maintaining it is much more so. Economic benefits of OO are not immediate; on the contrary, the initial investment is even larger than with conventional paradigms. The real benefit comes later in the lifetime
  • Software is growing ever more complex; this problem needs to be addressed
  • Economy by extending the software's lifetime through improved maintainability, not so much by time to market or re-use
  • Currently software is difficult to change, and the difficulty rises asymptotically over time. Aim is to enter the asymptotic regime as late as possible
  • Maintenance is complex; covers correction, modification, extension, evolution, preservation. Maintenance is 70% of the overall software effort, or even more. Begins long before delivery
  • Handling complexity: reliability, robustness
  • Software is often not fit for its purpose
  • Objects allow for self-contained components. Earlier attempts (subroutines, data structures) suffered from not being scalable. Objects bring data and behaviour together, allowing for opaque encapsulation and offering an organising principle
  • Re-use: sometimes used as an argument for object technology, but is often understood to make use of inheritance, which is a doubtful concept. Should not count on re-use too much. Usually aggregation more successful for re-use than inheritance
  • Origin of objects: people are able to cope with 7 +/- 2 problem pieces at a time. Encapsulating complexity helps to stay within this limit
  • Evolution of the chunk: from bit patterns for machine instructions and data, via high-level languages and data structures, towards objects
  • Objects usually hide data and only expose behaviour, hence allowing for a transparent change of the data representation. It is the object that responds to a message which decides how to do this, not the client sending the message
  • Object is a meaningful software agent, characterised by providing a well-defined set of operations. Usually, objects have states and identity independent of the state
  • Object-orientation: characterised by abstract data types (I am what I do), message exchange, polymorphism, inheritance (specialisation without changing the client)
  • Essentials: Inversion (put the data in charge rather than the routines), encapsulation and information hiding (interface vs implementation), messaging; nice to have: object types and conformance, classes (abstract classes to define the types, concrete classes to implement them). Late binding (type identification at run time) required for polymorphism to work, costs a little in performance, and language sophistication needed for type safety
  • Inheritance of implementation: nice to have, but not straightforward to use. Private inheritance is not object oriented; different from inheriting an abstract interface. Another way of sharing implementation: composition (a compact C++ illustration follows this list)
  • Programming languages: Simula - the beginning; Smalltalk - the way; C++ - the all pervasive; Eiffel - the fresh; Ada95 - the inevitable; Java - the saviour. Others: Objective C, Object Pascal, Object-oriented Cobol, Self, Actor. Clear market winner: C++; very successful: Java, again done afresh. Smalltalk: "clearly the answer, but what is the problem?" C++: C with Simula-influenced object-orientation, a big language with many overused keywords, difficult to learn, not really enforcing object-oriented programming; has no garbage collection, the programmer is in charge of managing the memory, which is a source of problems
  • (to be completed)
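
A compact illustration of the essentials listed above, with hypothetical classes: the abstract class defines the type, the concrete class implements it, late binding picks the implementation at run time, and composition shares implementation without inheriting it:

    #include <iostream>

    // Abstract class defines the type ('I am what I do')
    class Shape {
    public:
        virtual ~Shape() {}
        virtual double area() const = 0;   // behaviour exposed, data hidden
    };

    // Concrete class implements it; the representation stays private
    class Circle : public Shape {
        double radius_;
    public:
        explicit Circle(double r) : radius_(r) {}
        double area() const { return 3.14159265358979 * radius_ * radius_; }
    };

    // Composition instead of inheritance of implementation:
    // a Logo *has* a Circle, it is not a kind of Circle
    class Logo {
        Circle disc_;
    public:
        Logo() : disc_(1.0) {}
        double inkNeeded() const { return disc_.area(); }
    };

    void printArea(const Shape& s) {
        // late binding: the object receiving the message decides
        // which area() runs, not the client
        std::cout << s.area() << std::endl;
    }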
Discussion
  • Q: It is now recommended to separate data and algorithm objects; is this consistent with the basic OO ideas? A: Strictly speaking no; great care needs to be taken in the design not to run into the dangers of a complex language without getting the benefit
Thu 09:00 Topic M. Stavrianakou: Repository and releases
References Slides: HTML, PS, PDF, SDD
Summary
  • Production software: Fortran/Age mostly stable, C++ evolving
  • OO/C++ software evolving steadily, not all in official releases, but included in nightly builds
  • Supported platforms: HP-UX, Alpha-DEC, IBM AIX, Linux, Sun Solaris. Not all packages supported on all platforms
  • Can we phase out some of them? What about IBM AIX? Can we stop building software by end 1999? HP-UX will be supported at Cern until end 2001; when can we stop building software for it? Re-evaluate once dropped by Atrig?
  • Not currently supported at CERN: SGI (partly supported by Boston), WNT
  • Releases: 32 fortnightly releases in 16 months. Quite time consuming
  • Nightly builds (from the head of the repository): Fortran part usually builds, failures in OO/C++ software; could and should be used by developers for early debugging (see Lassi's build analyser). Should we aim for successful nightly builds? Automatic notification has its pros and cons
  • Production release based on cvs/SRT: very little testing has been performed so far, updates still in progress. Need all production executables (dice, dicepyt, atrecon, atlsim, atlfast, maybe atrig) at least on HP and Linux, perhaps Solaris. Commitments needed for testing - aim for end October 1999?
  • Issues: Packages to be added: xKalman++, Geant4, other. Geant4 is problematic since it uses a different naming scheme, but something needs to be done since there is no reliable binary distribution around
  • Generator packages: completely outdated, hoped to be taken up by MC group
  • Releases both in debug and optimised mode? Probably requires modifications to SRT to allow for re-use of binaries and libraries
  • Package author list not up to date
  • Test area: Removal to be scheduled immediately after production release, freeze (read-only access) earlier? Technical problem with age files to be solved
  • Shared libraries for CERNLIB: where will this be stored?
  • Web pages need re-organisation and update
  • SRT: some improvements necessary (documentation, functionality, long-term maintenance and support)
  • Where do all these questions get discussed?
Discussion
  • Question when support for a given platform can stop will be discussed in the National Board
  • Common sense suggests that code checked in should have successfully been compiled on at least one platform
  • Automatic notification of failures in nightly builds is considered a good idea
  • The simulation packages have not been changed; changes are mostly on atrecon. 0.0.18 and 0.0.19 have been heavily used for the physics TDR, should be stamped as a production one. However, some private code has also been used
  • The crucial point for checking is the simulation, since all production had been done based on the CMZ version
  • Q: Are there canonical plots or standard Kumacs for testing which can be used to record the results?
  • Production release must be stamped before the changed muon geometry gets implemented
  • There are just two people using the test area actively
  • The problem with the conversion to cvs/SRT is Atlsim itself, the makefile of which is orthogonal to the cvs/SRT ideas
  • Why don't we ask the author of Atlsim to implement his package into cvs/SRT?
  • There will need to be a group of people to worry about these technical questions, suggestions to Norman welcome
  • Must not incorporate Geant4 into Atlas repository, but push IT to provide working binaries
  • What about package documentation? Should we insist that it be included into the package itself, in which case it would impose restrictions? HTML looks like an attractive solution, QC group should address this issue
  • Important that all concerned communities are involved when important decisions are taken
Thu 10:10 Topic S. Fisher: Tools
References Slides: HTML, PS, PDF, PPT
Summary
  • Atlas Release Tools: Requirements document completed, review completed, document will be revised, and existing candidates will be checked against the requirements, will result in a recommendation on a course of action
  • Code checking: coding rules paper has been completed, IPT looking for a checking tool. CodeCheck has proved inadequate, cannot deal with STL, vendor not responsive. Steve and Maya involved, asked as individuals. Hope to complete study by Christmas (recommendation for a tool)
  • Together (lightweight CASE tool) recommended last time; a 30-day trial license is now available. Should now be tried. Attractive since it couples code and design development. See Web page
  • Atlas Computing Web: top pages are currently built by a script which takes fragments from outside the Web, adds standard header and footer, creates list of last modified pages. Problem: non-standard input files to edit - no validation tools, must follow rules, heavy procedure for small changes. BaBar has something with similar aims, developer edits page in place, change gets detected. Problems: original HTML cannot be re-generated as it was; many identical fragment files to be maintained. Proposal for Atlas computing: Fragment replaced by standard HTML, script does not modify HTML. CGI could be used to generate the navigation bars - author's HTML would never be changed. Examples on the Web
  • Other Web tools: tidy (on Asis, tidies up HTML code), HTDIG for index creation, Linbot to detect broken links and find recently changed pages
  • Plans to complete and deploy the CGI script, deploy Linbot and Tidy, update Computing Web
Discussion
  • The time needed to code the rules into the coding rule checker must not be underestimated
  • Changing URLs automatically is problematic for references from outside the system
  • Q: Would people be able to continue providing their private Web pages on AFS? A: Yes
  • Q: Can the tools automatically replace URLs (eg. wwwcn -> wwwinfo)? A: Not sure
  • Q: There are several services available through the CERN Web Office. A: They only support pages served by the CERN server
  • Q: Is it intended to also revise the physical structure of the CERN based pages on AFS? A: That should probably be done for all Atlas Web, not for Computing on its own
  • Q: Is there an Atlas Webmaster? A: Not currently, the management is looking for somebody
  • We need to revise the content and structure of the Computing Web
Thu 11:00 Topic D. Malon: Database and TileCal pilot project
References Slides: HTML, PS, PDF, PPT
Summary
  • Goal: Support 1999 test beam analysis using OO technologies
  • Status: OO implementation of the logical model of the TileCal, detector-centric data access to 1998 and 1999 raw data, hot-swappable custom calibration strategies, access via Paw and Kuip
  • Detector-centric view: detector is primary, events are stimuli. Keep the event as opaque as possible. TileElements are only created on demand
  • Persistent model separate from transient one (user code must compile without Objectivity), no modification to existing persistent classes for raw data. Various three-layer models for transient-persistent conversion studied
  • Existing examples: ShowEnergiesAndAFewADCSamples, CompareCalibrations, CalibrateRun, CalibratePartialRun, CreateSimpleNtuple
  • Code snippets shown for basic event loops (a hedged sketch in the same spirit follows this list)
  • Next: establish testbed for core technologies, in particular for data base; make it more useful for TileCal data analysis (beam data, high voltage info, muon walls, ...)
  • Generalisation: Geant4 TileCal simulation as alternative data source, extension to other calorimeter testbeams - both requiring revision of raw data model
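
In the same spirit as the snippets shown, a sketch of a detector-centric event loop; all names (TileDetector, TileElement, EventStimulus) are invented to convey the idea, not the pilot project's actual API:

    #include <vector>

    class EventStimulus {};                    // the event stays opaque

    class TileElement {
    public:
        double energy(const EventStimulus&) const { return 0.0; /* stub */ }
    };

    class TileDetector {
        TileElement element_;                  // placeholder store
    public:
        // elements materialised only when the analysis asks for them
        const TileElement& element(int /*id*/) { return element_; }
    };

    int main() {
        TileDetector tilecal;                  // the detector is primary
        std::vector<EventStimulus> run(100);   // events are stimuli
        for (unsigned i = 0; i < run.size(); ++i) {
            double e = tilecal.element(42).energy(run[i]);
            (void)e;                           // eg. fill a histogram here
        }
        return 0;
    }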
Discussion
  • Q: Can the examples be made available publicly? A: High priority to put it into the repository
  • Detector-centric view looks like a valid and interesting approach, may have some impact on the role of the event which gets de-emphasized by this approach
  • Q: Is it intended that this stuff will be used as primary way to analyse the test beam data? A: Yes, this will allow comparisons with the old way of analysing the test beam for the 1998 data. For 1999, the OO data access is easier than the conventional one
  • For the time being, a limitation is the missing recommendation for a new analysis tool. People are bound to N-tuples
Thu 11:35 Topic M. Stavrianakou: Analysis tools update
References Slides: HTML, PS, PDF, SDD
Summary
  • Current status: LHC++ has realised that Iris Explorer / HEPExplorer are not accepted by experiments. OpenScientist and JAS considered contributed packages
  • HTL used by some major experiments, Gemini/HEPFitting not widely used, but somehow accepted; Objectivity/DB tags: some use, accepted in principle
  • Other tools: PAW: can read HTL histos from Objectivity; JAS: attractive, but incomplete; C++ interface missing. LHC++ interface to JAS being worked on. Root used by Alice, FNAL, BNL, but monolithic architecture considered a problem. OpenScientist: attractive modular architecture, but one-man show
  • Basic user requirements: Easy access to N-tuple like data, access to more complex data, histogram manipulation, fitting, Paw-like graphics, vector PostScript output, persistence, scripting, remote/parallel analysis, other
  • Software requirements: use of standards (STL, Corba etc), robustness, flexibility, modularity, designed for change
  • Proposal: design for interfaces - can we define a set of abstract interfaces for major analysis components? (one possible shape is sketched after this list)
  • Starting points: Fitting (Gemini, HEPFitting), histograms (HTL), histogram service (LHCb, Atlas HistoManager), N-tuple like data (JAS DIM, Objectivity tags), other?
  • First steps: select classes to be exposed to users; prepare early prototypes of interfaces, and ask real users for feedback even if essential functionality is missing. User interface should present a Paw-like look and feel for histogram manipulation and presentation. Allow for distributed processing, preserving the possibility of local analysis of small data sets
  • User interface should provide a "monolithic" view of the system to the end user, allow execution of sequences of operations (scripting) and provide history facility; end user should be able to exploit underlying modularity of the system. More precise outline to be presented in HepVis 99, early prototype (empty shell) for LCB workshop
  • What should Atlas do? Physics community wants a recommendation now. Recommend Root? Cannot be used without Root IO; does this have an impact on the architecture and design of our software system? Root is not well suited to teach C++. How would we combine this with longer-term efforts with LHC++, JAS, OpenScientist?
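
One possible shape for such abstract interfaces (names invented, not the LHC++ design): user code programs against the interface, and concrete implementations could be backed by HTL, Root, JAS or OpenScientist without the user code changing:

    // Hypothetical abstract analysis interfaces
    class IHistogram1D {
    public:
        virtual ~IHistogram1D() {}
        virtual void fill(double x, double weight = 1.0) = 0;
        virtual double binContent(int bin) const = 0;
    };

    class IHistogramService {
    public:
        virtual ~IHistogramService() {}
        virtual IHistogram1D* book(const char* title,
                                   int bins, double low, double high) = 0;
    };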
Discussion
  • Q: What will the prototype be? A: Probably an empty shell, showing how the user interacts with the system
  • All communication can go via ASCII file, there is no problem
  • Root is able to teach C++, though without STL
  • In Star, people are writing analysis modules independent of Root. The Star data model is not dependent on Root. A wall can be built between Root and the experiment software
  • Once the data have been extracted from the data base, any tool is fine
  • Would be very nice to be able to do analysis on whatever data set, which requires uniformity of the data store
  • Important to recognise that Root is being suggested as an interim solution, every possible step should be done to make sure it does not creep in everywhere. There should be a plurality of interim solutions
  • If histograms etc. are not considered first-class objects, but different representations, as suggested by the graphics domain, the problem is less acute
  • Q: Can we, and do we want to, ensure that the LHC++ prototype is given enough attention? A: We'll try hard
  • CDF adopted Root as IO system, which has been done in a transparent way. Using Root as analysis tool requires inheritance of the infamous TObject - should push for its redesign
  • Root does not allow its different functionalities to be treated distinctly; the concept of building a wall between the experiment and Root is more sensible. The important thing is to go for abstract interfaces
  • Should address the question whether to recommend Root rather than letting everybody go ahead and use it privately
  • Not enough difference between Paw and Root in functionality to justify a recommendation for Root
  • Q: What will be used in the LHC++ prototype for persistency? A: It will not have any persistency yet
  • Situation is uncomfortable that people are using Root, but Root is not official policy. On the other hand, one must be very clear about the dangers
  • Problem is that we don't understand whether Root will have an impact on our architecture and design; should make some effort to have a more detailed understanding
  • We should make a clear statement of support and participation in the LHC++ effort towards abstract interfaces of analysis modules
Thu 12:20 Topic N. McCubbin: LCB workshop in Marseille
References Slides: HTML, PS, PDF, PPT
Summary
  • Open attendance, registrations accepted until 18 September
  • People interested to attend a Compaq non-disclosure presentation on high-performance computing to contact Norman
Fri 09:00 Topic J. Hrivnac: Graphics
References Slides: HTML, PS, PDF
Summary
  • Report to ATF
  • Atlas SW organisation, role of Persint, 3D event display, context sensitive reactions, ...
  • Trying to establish graphics contact persons in systems
  • Data: something (no space points though) in ID, space points for muons from Cobra, nothing from calos, everything from trigger
  • Aravis: InDetGraphics now available, problems with SceneList and Collections resolved; to do: menu for operations, show/hide menu, interactive changes of properties; perhaps later callback (refit etc), picking
  • Atlantis: much more difficult than Dali (corresponding display in Aleph); solutions: radial fish eye, angular fish eye, V-plot. Needs data after (pre-)reconstruction. Standalone program, data exchange via XML
  • Persint: primarily for muons, other systems available. Interface: XML, will be available on all platforms, including sources
  • XML: agreed with Wired, Atlantis, Cobra, Persint (partly under development); DTDs for Event, DetectorDescription, Structural definitions; use XML for all kinds of ASCII data exchange?
  • Design: improved for graphics control, informal review to take place
  • HepVis 99 in Orsay next week, still open for registrations
Discussion
  • Q: Is the program sufficiently well defined to only take part in parts of the workshop? A: Yes
Fri 09:20 Topic RD Schaffer: Data base, detector description
References Minutes of data base meeting with pointers to material: HTML
Summary
  • Detector description: one single source, infrastructure needs to be provided: general ASCII format (proposed: ODMG's ODL/OIF), compatibility questions with XML. Lots of contacts with systems necessary
  • Data base domain: understand the scope, key issues, existing stuff, plans for the next weeks/months
  • Scope of DB domain: underlying data base technology and associated infrastructure; on top of that, event data store, detector data store, others(?). To be addressed: event data model, event collections, event analysis and query services; detector description, calibration and alignment
  • Data base infrastructure: transient/persistent mapping (general solution? see the sketch after this list), distributed DB development, distributed data access, schema evolution, interface to control
  • What exists today? Identifiers, logical identifiers, transient event, sufficient detector description to make digits useful, event interface to allow for partial retrieval of an event (based on identifier ranges)
  • How to proceed: address generic transient/persistent mapping, and the design of what is going to be stored
  • Hits still missing, but should not be hard to do
  • Another possible area of work: mapping onto online event format
  • Experience with Objy: Star has given up using it as event store, now using Root I/O. Star requirements in terms of data volume and CPU power a little lower than, but at a similar scale to, Atlas' ones. Gave up on XDF, considered Objy and RIO. Development cycles with Objy found to be slow; Objy an elegant, but not performant, tool. Objy burdens: difficult to maintain a single schema across the experiment; schema evolution unusable; access control late; concerns about delays in porting to new platforms; scalability concerns (as confirmed by BaBar). Root fully adopted as I/O solution for MDC2, somewhat influenced by the earlier CDF decision
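
A bare-bones sketch of the transient/persistent separation (invented names): user code compiles only against the transient class, and a converter, the single place that knows both representations, maps to the persistent one, so no Objectivity header leaks into user compilations:

    // Transient class: all that user code ever sees
    class TrackT {
    public:
        double pt;
    };

    // Persistent class, compiled only inside the data base layer
    // (with Objectivity it would derive from the persistence base class;
    // kept plain here for the sketch)
    class TrackP {
    public:
        double pt;
    };

    // Converter between the two representations
    struct TrackConverter {
        static TrackP toPersistent(const TrackT& t) {
            TrackP p; p.pt = t.pt; return p;
        }
        static TrackT toTransient(const TrackP& p) {
            TrackT t; t.pt = p.pt; return t;
        }
    };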
Discussion
  • Q: How can the detector description be used by people working with the detector simulation? A: For the muons, we have RPCs and MDTs in the barrel, should be usable fairly soon. Proposed to use one system to prototype the two alternative ASCII solutions. Two concrete models should not be converted into each other directly, but always through a generic model. Some more discussion needed
  • Q: Are the resources available to us sufficient to support a large-scale Objectivity based development? A: Hardware-wise yes, the potential bottleneck are people working on it
  • Q: How well prepared are we to move away from Objy? To which extent is the transient model depending on Objy? A: Today, there is not much dependence; even the question whether or not one should allow for on-demand data access is not strictly coupled to Objy
  • LCB has asked Atlas for our position on fallback solutions for data bases
  • Q: Is it possible to store the same digits in Objy, and retrieve them, as is possible from Geant3 Zebra tapes? A: In principle yes, but the detector description is missing from Objy. Necessary step is to move the detector description out of Age/Geant3/Zebra
  • Useful to redo the 1 TB milestone allowing for more useful access to data
  • Q: Are we going to test alternative solutions to Objy such as Root or relational data bases now, or do we leave that to IT division or RD45? A: We hardly have the manpower to exploit one product sensibly. We should push for one common solution in HEP, after first properly evaluating what is available
  • Root I/O should be taken seriously
  • Q: How many of the Objectivity shortcomings are strictly related to the product, and how many are generic to object oriented data bases? A: No experience
  • Schema evolution indeed a very important question
  • Root I/O again requires that persistent objects inherit from TObject (see the sketch below). Most persistency packages will require inheritance from some base class
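
For concreteness, this is what the TObject requirement looks like in practice (the class itself is a made-up example):

    #include "TObject.h"

    // A class meant to be written with Root I/O derives from TObject and
    // carries the ClassDef macro for Root's dictionary machinery
    class MyHit : public TObject {
    public:
        Double_t fEnergy;
        ClassDef(MyHit, 1)   // class version for the streamer
    };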
Fri 10:15 Topic D. Rousseau: Reconstruction
References Slides: HTML, PS, PDF
Summary
  • Status and plans of various systems, including trigger
  • Contributions from people involved in physics TDR; those involved in OO software; those involved in hardware activities. Important to get them together, and to provide tools, documentation etc. on the Web
  • Existing software successfully reconstructing events (Persint display)
  • New challenges: object-oriented C++, numbers from data bases, online and Geant4 data, trigger awareness, realism (misalignment, noisy/dead channels), speed, robustness, performance
  • Trigger not going to use offline software, but L1/L2 to be simulated in same framework; event filter to run offline software, do not intend to write their own reconstruction code. Optimisation highly important
  • LAr: first step is to understand exactly what exists (C++ prototypes, reverse engineering of F77)
  • TileCal: No C++ reconstruction yet, but pilot project considered very important. Do not plan to wrap any existing F77 code
  • ID: plenty of C++ packages already, but no real duplication between F77 and C++. Identified commonalities should become common code (eg. clustering). F77 code for vertexing and conversions will need to be wrapped
  • Muons: C++ code exists, but not tested on real events yet. Focussing on interfaces first
  • Event definition: Containers and contents to be defined; for the latter, profit from experience with combined N-tuples. To be considered: combined entities, intermediate entities, semi-raw entities (today's digits). Data to be defined first, then operations (a sketch of such an entity follows this list). Identified entities will broadly define an event flow. Algorithms in between can be big modules
  • Atrecon needs to be kept alive (detector performance studies with new layout, trigger TP spring 2000, new physics studies, test of new C++ packages against old ones in full chain)
  • Paso: wrapped Fortran or existing C++ to be integrated. Don't spend too much effort fitting things which won't go there; rather use it in order to derive requirements for the eventual framework. Good to implement slices of the final software. Paso should have the possibility to read the combined N-tuple or the Zebra RECB bank. Should encourage doing simple analysis
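
A hedged sketch of the 'data first, operations later' idea (names are placeholders): an identified entity and its container, of the kind the event flow would pass between big algorithm modules:

    #include <vector>

    // Data defined first: a reconstructed-track entity, with contents
    // informed by the combined N-tuple experience
    class RecTrack {
    public:
        double pt;
        double eta;
        double phi;
    };

    // A container in the event: one module fills it, the next reads it
    typedef std::vector<RecTrack> RecTrackContainer;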
Discussion
  • For the muons, the objects and interfaces have long been identified
  • Q: Defining entities first, and then operations, is not proper OO. A: Not quite correct - you have to start identifying the entities
  • Reading from N-tuples or from RECB Zebra bank would be a requirement to Event, not to Paso. Modularity is an important design goal. Of course, the Event reading in Paso could be replaced by anything that provides this functionality. N-tuples will probably just take a couple of days
  • Need to become clear about the definition and role of digits and hits
  • Q: Why do we want to restrict Paso to single module testing? A: Currently, no information flow between modules is provided by Paso, but this could be overcome by dirty solutions
Fri 11:10 Topic S. Rolli: World-wide computing group
References Slides: HTML, PS, PDF, PPT
Slides of WWCG meeting:
I. Legrand: Monarc simulation tool - status and future plans HTML, PS, PDF, PPT
Y. Morita: Validation HTML, PS, PDF, PPT
H. Newman/L. Perini: Monarc plans for LCB workshop and beyond HTML, PS, PDF, PPT
Summary
  • Legrand: Monarc simulation system, described data model and the structure of the program
  • Validation done as combined effort of simulation and test bed working groups of Monarc, the two are largely in agreement
  • Anyone interested should contact Krzysztof Sliwa
Discussion
  • Q: Is the model working now? A: Yes
  • If we want to write something about the computing model, we need to be more serious about these efforts, and not leave them entirely to Monarc. This could be an item for the National Board
  • Effort seems to focus on Regional Centres now, computers doing end analysis somewhat left out
Fri 11:20 Topic H. Meinhard: Summary of talks
References Slides: HTML, PS, PDF, PPT
Summary
  • (see slides, I'm not going to summarise the summary)
Fri 12:05 Topic H. Meinhard: Dates and venues of meetings in 2000
References N/A
Summary
  • Preliminarily accepted dates:
    14 Feb - 18 Feb 2000 at CERN (adjacent to Atlas week and CHEP)
    08 May - 12 May 2000 outside CERN
    28 Aug - 01 Sep 2000 at CERN
    27 Nov - 01 Dec 2000 at CERN
  • LBL expressed interest to organise the May meeting in Berkeley
Discussion
  • Overlap of February Atlas week and CHEP conference is very unfortunate
Fri 12:15 Topic N. McCubbin: Conclusions, closing remarks
References Slides: HTML, PS, PDF, PPT
Summary
  • Interesting, but fatiguing, week...
  • Organigram: Body for technical issues required, place for Graphics required
  • Physics requests: Root(Paw)
  • Session with LHCC: LHCC computing review to take place in March 2000
  • Detector description: Urgent need, RD, David Malon, Andrea, etc. to provide their thoughts; actions to be defined next week
  • Data base, Objectivity: LCB asking what RD45 should do, and what the CERN/LHC risk-averse strategy should be. To be discussed in CSG, opinions to be collected by RD Schaffer and David Malon
  • ATF session: ATF first heard about what has been done elsewhere, now own design and prototyping being launched. Follow-up after end of ATF required. Process will take time; overall architecture warrants good design. Hence Paso as a stop-gap solution
  • Lots of familiar problems in repository and releases... Freezing TDR version will require action from physics groups
  • TileCal OO work: More of this, please!
  • Analysis tools etc: no final product available now; interim plurality of approaches is fine. Software community does not forbid use of any product. Applies also to using the 'PAW part' of Root, but worries about implications on architecture; hence ask users to write N-tuples, then use Root - no direct usage of Root I/O. Plurality also implies support for latest LHC++ efforts
  • Reconstruction: General move to OO; they want more in Paso - good move. Hits required
  • Graphics: wide range of products
  • The PLAN: absolutely needed... for us, for the rest of Atlas, for the review bodies etc. etc.
  • Thanks to Maya and Helge for organising the farewell party for Jürgen - should have parties more often (eg. before Christmas?)


Helge Meinhard / September 1999
Last update: $Id: minutes.html,v 1.9 1999/09/21 07:12:53 helge Exp $