MINOS Offline Software Requirements (v 09/15/00 )
2 General Offline Requirements
3 Event Display
6 Detector Information
7 Data Base
This document lists the requirements for all of the MINOS offline
packages. The order in which the packages are listed roughly corresponds
to the order of their dependency; packages listed first depend on
packages listed after them. This also roughly corresponds to the order
of importance of each package to the "physicist programmer". The
requirements for the offline system as a whole are listed first.
The requirements for each package are subdivided into labeled
categories in order of importance. For example:
A2: Requirements should be listed in this format (1)
is the second requirement in section A and is of primary importance.
(This format is not yet uniform across all sections.)
Each package is listed with the e-mail address of the coordinator
of that package. If you have comments or suggestions for additional
requirements, please e-mail the package coordinator. The document is
maintained and edited by
firstname.lastname@example.org. Please send general comments there.
These are the requirements for the MINOS offline system taken as a
whole.
D1: The framework should be well documented and easy to learn. (1)
D2: The framework should be easy to maintain. (1)
D3: Code revisions should be tracked and recorded; it should
always be possible to recover the source code used to generate a
given result. (1)
P1: To keep pace with the data, the reconstruction should be able
to handle roughly 100 events in 20 CPU seconds. This corresponds to 10
CPU's processing 100 events in the time between spills.
P2: The required Monte Carlo simulations for physics analysis
should take no more than one month to generate, track, and reconstruct.
P3: Reconstruction jobs should be easy to move from site to site.
For example, it should be possible to run simple reconstruction jobs on
a laptop computer that is not connected to a network.
Interface through menus and buttons with keyboard accelerations.
Most common display manipulations should be buttons; less common
actions should be in menus.
Ability to define macros.
Should have run/event search mechanism
Should allow user to apply ntuple cuts on events before they are
displayed.
Supply online help
Should accept data from both online and offline data streams
Should allow user to save selected events to separate file
Support multiple geometries, including near detector, far detector
(calibration module and Soudan II??), displaying events using a similar
interface, and be able to change from one to the other on the fly.
Allow events to be displayed in multiple projections including:
Ability to rotate and zoom views.
Views should be rendered optionally as wire-frame and solids.
Views should be rendered using multiple graphics drivers.
Ability to display graphics remotely.
Ability to add custom views
Provide drawing tools to allow text, arrows, lines etc. to be
added to the display
Allow for generation of conference-quality hard copies in various
formats including .ps, .eps and .pdf.
Display detector response in both time and charge mode.
View detector response in text/table format
Allow for display of reconstruction information including:
- u and v vs z,
- x and y vs z,
- system relative to beam axis,
in both graphical and text/table formats.
Allow for display of Monte Carlo information in both graphical and
text/table formats.
Allow display of common event and user histograms
Provide hooks to allow for special user draw and analysis routines
Ability to drive reconstruction with altered parameters and hits for:
- Muon tracks,
- Hadron showers,
- EM showers.
These need to be supplied as part of a coordination effort with
currently non-existent analysis groups.
Packages envisioned to partially fulfill anticipated framework
requirements to support reconstruction activities include:
Any reconstructed object:
- See diagram of examples in event reconstruction hierarchy.
- All concrete Candidate classes inherit from CandBase class.
- CandBase provides framework support for common interfaces and
traits of concrete Candidate classes.
- CandBase manages a list of associated "component" Candidate
objects called "Daughters".
(adapted in part from Babar BtaCandidate classes)
- Encapsulate attributes of reconstructed objects.
- Provide mechanism to Set and Get reconstructed object attributes.
- Allow co-existence of multiple versions of the same Candidate:
- Calculated by different Algorithms.
- Calculated by different invocations of the same Algorithm,
perhaps with different Algorithm parameters or for differing event
contexts.
- Facilitate one's ability to understand or to rebuild a Candidate
object based on history information recorded in each instance of a
Candidate.
- Ability to write out Candidate objects to persistent store.
- Ability to retrieve Candidate objects from persistent store.
- Provision for making a transient modification to an existing
Candidate object, from either a transient or permanent source,
without perturbing other users of the same Candidate object, either
within or outside of the same analysis process or "thread" where the
modification is made.
- Ability to commit a modified transient object to permanent store.
- Machinery to define and test "equality" or "overlap" of
Candidate objects.
- Automatic deletion of unneeded Candidate objects from memory.
- A concrete Candidate object must be sufficiently configured at
creation that the Candidate is mechanically functional. This
guarantee should not depend on user invocation of a separate
initialization method subsequent to Candidate creation.
- The only user access to a Candidate object is via a "Handle"
object, which contains necessary methods for interaction with the
Candidate. Each concrete Candidate type has a specific "Handle"
class to go with it. Several "Handle" instances can refer to the
same Candidate object.
- Candidate "Set" and "Get" methods are in the "Handle" class.
- Candidate constructor and destructor are private. The constructor
is called from a factory method which returns the Candidate "Handle".
This ensures that Candidate access is through the "Handle" only.
- The factory method which creates the concrete Candidate is passed
an "Algorithm" object which is run in the Candidate constructor to
configure the Candidate. This Algorithm can completely or partially
prepare the Candidate, so long as the resulting Candidate is
mechanically functional without further initialization.
- Algorithms can be run subsequent to the creation of the Candidate
to further refine Candidate attributes or to modify the list of
Candidate "daughters". Such an Algorithm is passed the Candidate
"Handle" as an argument.
- Algorithm identity and configuration and Candidate "context" are
recorded in the Candidate object as part of the Candidate history.
- "Handle" object holds a "reference-counted pointer" to Candidate
- Candidate destruction is automatic when "reference-count" -> 0.
- A Candidate object may clone itself if "reference-count" > 1
before allowing modification.
A class which contains the code to calculate or modify the
attributes of a Candidate object, including the "daughter" list.
- Algorithm must be configurable with default parameter set (from
database, e.g.), but allow user to reset parameters dynamically.
- Algorithm identifier and configuration parameters must be
captured by each Candidate object for its history record.
- Algorithm objects must be managed to prevent memory leaks.
- Concrete Algorithm object is "stateless". (It may contain limited
"intrinsic state" which is integral to the Algorithm or expensive to
recompute.)
- Concrete Algorithm object gets its configuration parameters from
an AlgConfig object which encapsulates Algorithm "extrinsic state".
The user can override parameter settings in the AlgConfig object.
- Concrete Algorithm object and AlgConfig object are packaged in an
AlgHandle object which is passed to the constructor of a concrete
Candidate.
concrete Algorithm ID and AlgConfig information.
- Concrete Algorithm objects are instantiated and owned by an
AlgFactory, which loans them out pre-packaged with AlgConfig in an
AlgHandle.
NEUGEN will be a neutrino scattering application. One common use
shall be as a reliable, modular, comprehensively tested event generator
for use in Monte Carlo simulations. It will also be able to provide
additional information related to neutrino scattering.
G1: The physics should be correct. (1)
G2: NEUGEN will generate neutrino interactions of all flavors over
the energy range 10 MeV - 100 GeV. (1)
G3: All relevant (known) reactions in the energy range of interest
will be included. (1)
G4: The physics content of the generator should be accessible. (1)
G5: Will serve as the event generator for MINOS simulations using
beam and atmospheric neutrinos. (1)
G6: Will be MINOS-independent, should not have dependencies on any
MINOS packages. (2)
G7: Where possible, shall attempt to reuse classes developed for
other large HEP applications (particle, cross section classes?). (3)
F1: Black-box use will be encouraged / supported - defaults for
parameters will be "best values". It should be apparent when the
generator is running in non-default mode. (1)
F2: Should be capable of running in standalone mode for fast MC
simulations, without geometry packages, etc... (2)
F3: Users should be able to examine cross sections for specific
processes.
F4: Users can determine uncertainty on cross sections. (3)
U1: Should provide an easy user interface to control selection of
physics models and parameters. (1)
U2: Users should be able to determine the physics process
responsible for any given event. (1)
U3: Users should be able to replace packages with their own. (2)
E1: All error and warning messages must be intelligible.
E2: Error messages should provide enough information about the state
of the system at the occurrence of the fault that the condition is
easily reproduced. (2)
O1: The version and settings of the generator shall be recorded
with the data for each run. (2)
O2: Event generation should be reproducible given the same initial
conditions.
O3: Shall provide distributions of important kinematic variables at
the end of each run or other information summarizing the run as a whole
(#s of NC, CC events, ...). (3)
All models will be: Documented, Verified, Fully Implemented
V1: Documented: A description of the underlying physics will be
provided. (1)
V2: Verified: All models incorporated into the generator shall have
their output compared to data where available to determine the optimum
settings for running in default mode. (1)
V3: Fully Implemented: Events can be generated using this model,
users have control over all adjustable parameters. (1)
V4: All information related to the validation or verification of
the package should be contained within the package. (2)
C1: Shall use ROOT as the framework. (1)
C2: Shall run on every system where MINOS code will be supported.
C3: Shall run quickly (to be specified). (1)
This package will provide an interface for users who want to know
the neutrino flux in the regions inside and surrounding the MINOS
detectors.
G1: The calculation of the total and differential flux should
reproduce the beam simulations to a few parts in 10^4, so as to be
negligible compared to the target of 2% near/far uncertainty. (1)
G2: The interface should be the same for all flux tables and the
same for both detectors. (1)
The interface to the user should use a consistent reference frame,
i.e. xyz-axes relative to the detector coordinate system. (1)
G3: The tables provided should include:
- A "generation flux" optimized for simulations of neutrino
interactions,
- atmospheric neutrinos.
G4: There should be interfaces for:
- Total flux given location and flavor
- dPhi/dE given location and flavor
- Random neutrino energy given location and flavor
- dPhi/dcos(theta)/dphi given energy, location and flavor
- Random neutrino direction given energy, location and flavor
G1: Provide a detailed, accurate, fast, and efficient GEANT
simulation of the NuMI beam line.
G2: Allow for variations of critical variables (positions,
alignments, currents, etc.) and provide an easy way for users to input
these or to have them read from a data base.
G3: Allow simulation of actual integrated run conditions (horn
positions, currents etc.)
O1: Output should be in ntuple format.
O2: For each neutrino, output data should include:
BM1: Provide a run mode useful for tests of beam monitors,
providing:
- Information to calculate the neutrino fluxes at arbitrary points
- Information to allow the spectrum to be re-weighted to various
hadronic production models
- Information about parent protons and mesons
BM2: Provide a simulation of monitor chambers which can be turned
on/off depending on the application (1)
- Adjustable particle cuts
- Optional non-weighting of secondaries
- Additional output about particle illuminations at the monitor
The B field classes should reproduce the FEA calculation with an
accuracy of a few parts in 10000.
Support mechanism for building and managing simultaneous versions
of the geometry at one time.
Build a geometry native to the simulations framework (e.g. GEANT3 or
GEANT4). At least for GEANT3 there must be a mechanism that regenerates
this as the reference changes from one version to another, since GEANT3
is only capable of managing a single geometry at a time.
Build a geometry compatible with the event display package (i.e. ROOT).
For a given (x,y,z) in a global coordinate system,
determine the material type.
Translate between local (e.g. strip) and global (detector)
coordinates.
Provide information about the detector's configuration:
- Plane information
- Number of planes
- Type (and order)
- Active detector
- Number of strips
- Position/length of each
Provide support for navigation and collections of hits, digits
which are grouped hierarchically into collections by strips, planes,
detector subsections (supermodules) and finally the detector as a
whole. While the geometry package doesn't actually do the work, it
should provide a natural means of helping to organize the information.
Provide access (navigational only?) to related information such as
fiber tail and clear connector lengths; MUX associations (fiber ->
pixel -> tube).
Given a position, return the distance to the nearest surface
(change of material).
Future: Given a position and direction, find the distance
to the next surface (change of material).
Please see the data base www pages:
and in particular the use case page:
The Job Control shell will provide a framework for analyzing data.
It will provide an interface to the resources provided by the MINOS
framework and to offline reconstruction and calibration modules,
allowing these modules to be organized into analysis "jobs".
G1: The job control shell should work identically on near and far
detector data. (1)
C1: The shell will provide an interface to configure framework
services, such as:
- The message service. (1)
C2: The shell will provide an interface to specify the modules to
be used in the job and their order. (1)
C3: The shell will provide an interface to configure
analysis/reconstruction modules. (1)
C4: The shell will provide notification of major state changes:
- begin of job,
- new run,
- end of run,
- end of job,
- abnormal termination,
or the equivalent (ie. whatever defines a "run"). (1)
C5: The shell will provide access to the data for each event, or
the equivalent (ie. whatever defines an "event"). Modules will
have separate entry points for methods that require read-only access to
the data as well as methods that need to alter the data. (1)
C6: The shell will provide access to other module-specific entry
points. (2)
U1: Jobs should be able to run in batch mode. (1)
U2: Jobs should be able to run in interactive mode. (1)
U3: Options should be settable from either the command line or
input macro file. (1)
U4: Options should include:
- event headers,
- physics events,
- calibration events and other "non-physics" records
U5: The job control shell will supply a GUI interface (2)
IO1: Allow events that pass all entry points to be written to an
output file.
IO2: Allow output files to be split according to run number or the
number of events written. (2)
IO3: Provide an end-of-run summary of the events processed by each
entry point and other job statistics (CPU time, etc.) (2)
IO4: Allow for partial output of events (2)
- Files to process
- Number of events to process
- Number of events to skip
- Output file
- Message Service configuration.
- Lists of initialization, event, and end user entry points in the
order they will be called
- On/Off/Reverse switches for pass/fail decision points
The Data Distribution package will handle the distribution of data
from the online data sources to client applications. The client
applications will be able to subscribe to certain subsets of data
through an interface provided by the Data Distribution package.
G1: Distributes data from near-online data files to client
applications.
G2: Provides a user-friendly interface to clients for accessing
data.
G3: Use other HEP event servers as models in design where
appropriate.
F1: Clients will be able to subscribe to certain subsets of data.
F2: Clients will be serviced at different priorities where
appropriate.
F3: The Server will have access to the data written by the DAQ to
disk soon after it is written and before the data file has been
closed.
F4: Data transfer over the internet is supported, so that clients
can be remote from the data sources. (1)
U1: Client interface to data should be user-friendly and
well documented.
E1: All error and warning messages must be intelligible.
V1: The data distribution code will be tested in a simulated
environment before use in the detector environment. (1)
V2: A "User Manual" describing it's use along with documentation
of the client interface will be provided. (1)
C1: Must support ROOT data format (1)
C2: Client code should run on every system where MINOS code is
supported.
C3: Server code should be written to the POSIX standard to run on
Linux/UNIX operating systems. (2)
C4: Server code must be fast enough to keep up with the data
rate.
This document is a preliminary attempt to list the Data
Distribution system requirements. I have used Hugh Gallagher's Neugen
requirements list as a guide.
Requirements for a generic navigation model are to some extent open
ended. Beyond the essential requirements, the purpose of the model is
to support algorithm development, and this is done in part by
recognizing common patterns of use which are then generalized and
formalized into the model. So the requirements are divided into 3
groups:
- Essential: Requirements that have to be satisfied for the
model to be usable.
- Important: Requirements that will probably be satisfied
but for which further analysis or use cases are needed.
- Potential: Requirements which may never be satisfied but
could be without breaking either the abstraction or the user interface.
Requirements can be further subdivided by the type of user:
- End User: The user navigating, and possibly extending, an
existing object structure.
- Developer: The user developing new classes whose objects can
participate in navigatable object structures.
Unless indicated to the contrary, requirements are those of the End
User.
- Developer: Minimally invasive, i.e. the model should place as few
constraints as possible on the class.
- Developer: Add little code overhead to the class.
- Developer: Easy to add to a class.
- Present a simple, intuitive interface.
- Efficient, both in cpu and memory.
- Navigation over all members of a single set by means of an iterator
in a single direction with undefined order.
- Navigation, in either direction, over a 1:n relationship between
members of two distinct sets.
- Navigating a relationship in the direction n:1 gives direct access
to the target of the relationship.
- Navigating a relationship in the direction 1:n is by means of an
iterator of exactly the same type as when iterating over a set. The
order of iteration is undefined.
Henceforth the term Iteration refers both to:
- Iteration over a single set
- Iteration over an instance of a relationship in the 1:n direction
- Multiple relationships between the same two distinct sets.
- Multiple iterators over the same set or relationship.
- Type safe, i.e. all returned references to objects are of the correct
type; no unsafe "casting up the inheritance tree" is required.
- Sorted iteration with a user-defined sort function object that can be:
- User written at compile time.
- User written at execution time, i.e. using CINT.
- Selective iteration with a user-specified selection function object
that can be:
- User written at compile time.
- User written at execution time, i.e. using CINT.
- Simultaneous sorting and selection.
- Bi-directional iteration.
- Failsafe iteration under object structure change. That is to say
the iterator should behave as if associated with a null set once the
structure has changed.
- Iterator cloning.
- Stack-only iterators. Forcing iterators to be stack only deprives
the user of one opportunity to create a memory leak!
- Random access iteration i.e. direct access to the object satisfying
some type of selection criteria.
- Dynamic iterators i.e. iterators that attempt to adjust to object
structure changes rather than just being failsafe.
- Multiple level sorting. Used to break possible degeneracy of a
single level sort.
G1: The interface should be comparable to, and as easy to use as,
cout and cerr (1)
G2: Allow print thresholds to be set on a package-by-package basis
G3: Perform with comparable speed as cout and cerr (2)
G4: Handle un-printed messages as quickly as possible (2)
G5: Allow routing of messages to one or multiple output streams
G6: Allow output streams to be concatenated at the end of jobs (2)
F1: Provide convenient formatting of numerical output (1)
F2: Allow messages to be tagged with:
- message type,
- package name,
- file name,
- line number, and
- CVS version number
in such a way that this additional information can be turned on/off by
the user on a package by package basis (1)