The ATLAS experiment is being built at the Large Hadron Collider (LHC) at CERN. The LHC is a proton-proton collider with a centre-of-mass energy of 14 TeV.
The challenge of the intended ATLAS physics programme lies in selecting rare predicted processes with high efficiency while rejecting much higher-rate background processes across a huge number of channels, O(10^8). Decisions must be taken every 25 ns at the bunch-crossing rate of 40 MHz; at the design luminosity (10^34 cm^-2 s^-1) each bunch-crossing contains about 23 inelastic pp interactions. On the other hand, the back-end storage rate is limited to approximately 100 Hz (with an average event size of 1 MB) because of computing limitations.
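The required overall rejection and the resulting storage bandwidth follow directly from these numbers; a minimal back-of-envelope sketch, using only the rates quoted above:

```python
# Back-of-envelope check of the overall trigger rejection and the
# back-end storage bandwidth, using the rates quoted in the text.

bunch_crossing_rate_hz = 40e6   # 40 MHz bunch-crossing rate
storage_rate_hz = 100.0         # ~100 Hz back-end storage limit
event_size_mb = 1.0             # average event size, 1 MB

rejection_factor = bunch_crossing_rate_hz / storage_rate_hz
bandwidth_mb_per_s = storage_rate_hz * event_size_mb

print(f"overall rejection factor: {rejection_factor:.0e}")   # -> 4e+05
print(f"storage bandwidth: {bandwidth_mb_per_s:.0f} MB/s")   # -> 100 MB/s
```

That is, the trigger chain as a whole must reject roughly 400,000 bunch-crossings for every event written to storage.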
The trigger system handles this high-rate selection with a three-stage architecture: Level 1 (LVL1), Level 2 (LVL2), and the Event Filter (EF). The last two stages (LVL2 and EF) together form the High Level Trigger (HLT). The idea of the three-stage architecture is to reject high-rate background processes at the earliest possible time, leaving the complex and slower algorithms for the later steps. The LVL1 trigger receives the full LHC data at the 40 MHz bunch-crossing rate and reduces it to 75-100 kHz. LVL2 reduces this further to about 1 kHz, and during the last processing stage, event filtering, the data stream is reduced to the acceptable 100 Hz. The trigger algorithms at LVL1 are simple and implemented as firmware in a very fast custom hardware environment (ASICs, FPGAs). The LVL2 and EF algorithms are implemented in C++ and run on a distributed multiprocessor PC farm.
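The per-stage rejection factors implied by these rates can be checked with a short sketch (taking the 75 kHz end of the quoted LVL1 output range):

```python
# Per-stage rejection factors implied by the rates quoted above,
# assuming the 75 kHz end of the LVL1 output range.

stages = [
    ("LVL1", 40e6, 75e3),   # bunch-crossing rate -> LVL1 accept rate
    ("LVL2", 75e3, 1e3),    # LVL1 accept -> LVL2 accept
    ("EF",   1e3,  100.0),  # LVL2 accept -> storage
]

for name, rate_in, rate_out in stages:
    print(f"{name}: rejection factor {rate_in / rate_out:.0f}")
# -> LVL1: rejection factor 533
# -> LVL2: rejection factor 75
# -> EF: rejection factor 10
```

The rejection is deliberately front-loaded: the fast hardware stage does the bulk of the reduction, so the slow C++ algorithms only ever see a small fraction of the events.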
The LVL1 trigger makes rough evaluations of "interesting" physics signatures on the basis of multiplicities of the following trigger objects above various pT thresholds: muons, electromagnetic clusters, narrow jets (isolated hadronic tau decays or isolated single hadrons), jets, missing transverse energy, and total scalar transverse energy.
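The multiplicity-counting idea can be illustrated with a hypothetical sketch; the object types and threshold values below are purely illustrative, not the real LVL1 menu:

```python
# Hypothetical sketch of LVL1-style multiplicity counting: for each
# object type, count how many candidates pass each pT threshold.
# Object names and thresholds here are illustrative only.

from collections import Counter

def lvl1_multiplicities(candidates, thresholds_gev):
    """Count, per (object type, threshold), the candidates passing it."""
    counts = Counter()
    for obj_type, pt_gev in candidates:
        for thr in thresholds_gev.get(obj_type, []):
            if pt_gev >= thr:
                counts[(obj_type, thr)] += 1
    return counts

# Illustrative candidates (type, pT in GeV) and thresholds (GeV)
candidates = [("muon", 22.0), ("muon", 8.0), ("em_cluster", 30.0)]
thresholds = {"muon": [6, 20], "em_cluster": [25]}
print(lvl1_multiplicities(candidates, thresholds))
```

A trigger decision would then compare such multiplicities against the requirements of the active trigger menu.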
The LVL1 algorithms are executed on custom hardware with a decision time of about 2 μs (including transmission time). The data from all detectors are held in pipeline memory arrays during LVL1 processing and then passed to LVL2/EF via the readout buffers (ROBs). For each object found (e.g. muons, e.m. clusters) the LVL1 trigger provides its position (η, φ) and the pT threshold passed, thus marking regions for further analysis by the higher-level triggers.
The LVL2 trigger is largely based on Regions of Interest (RoIs). It re-evaluates the objects identified by LVL1 within the frame indicated by each RoI. LVL2 analyses full-granularity data from all detectors to decide whether the event should be selected. The average LVL2 processing time is about 10 ms.
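The RoI mechanism can be sketched as follows; the record layout, window size, and function names are illustrative assumptions, not the actual ATLAS data model (and for simplicity the sketch ignores φ wrap-around):

```python
# Hypothetical sketch of RoI-guided data access at LVL2: an RoI carries
# the (eta, phi) position and threshold delivered by LVL1, and LVL2
# requests only the detector data inside a window around it.
# All names and the window size are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class RoI:
    eta: float
    phi: float
    pt_threshold_gev: float
    obj_type: str  # e.g. "muon", "em_cluster"

def select_hits_in_roi(hits, roi, d_eta=0.2, d_phi=0.2):
    """Keep only hits within a (d_eta, d_phi) window around the RoI centre.

    Note: phi wrap-around at +/-pi is ignored in this sketch.
    """
    return [
        h for h in hits
        if abs(h["eta"] - roi.eta) < d_eta and abs(h["phi"] - roi.phi) < d_phi
    ]

roi = RoI(eta=0.5, phi=1.2, pt_threshold_gev=20.0, obj_type="em_cluster")
hits = [{"eta": 0.55, "phi": 1.25}, {"eta": -1.0, "phi": 0.3}]
print(select_hits_in_roi(hits, roi))  # only the first hit survives
```

The point of the design is that LVL2 reads full-granularity data, but only for the small detector regions flagged by LVL1, keeping the data volume per decision manageable.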
The EF is the final selection step. During event filtering the full event data is reconstructed (vertices, track fitting, etc.), and events are classified and inserted into the database.
The current work is dedicated to the development of the trigger system within the LVL1 and HLT groups.
The LVL1 activities involve standalone tests of the Jet-Energy Modules (JEM0, JEM1), the Timing Control Module (TCM), and the Common Merger Module (CMM), as well as the CAN bus connection to the ATLAS Detector Control System (DCS).
The idea of the standalone tests is to have separate testing programs for firmware and hardware checks over PC LPT, JTAG, and CAN bus connections. The test programs for JEM0 and JEM1 have been completed and successfully applied, while development for the TCM and CMM modules is in progress.
The CAN bus connection to DCS comprises the development of raw CAN data pre-processing modules (for both National Instruments and Kvaser CAN cards), LabVIEW monitoring, OPC server connections, and Slow Control development within the process visualization and control system PVSS II.
The HLT activities constitute the major part of this work. They include the development and testing of HLT software, particularly the Steering packages. The Steering, used at both LVL2 and EF, is an environment that drives the running of certain algorithms on subsets of data, as required by a sequence of steps, in order to classify and select events. The current work on Steering is to test the implemented prototype for stability and performance, especially with data arrays close to real experimental data. The next step will be to develop the final system.
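The step-driven selection idea behind the Steering can be sketched as follows; the step names, data keys, and selection criteria are illustrative assumptions, not the actual Steering interface:

```python
# Hypothetical sketch of the Steering idea: a sequence of steps, each
# running an algorithm on a subset of the event data, with the event
# rejected at the first failing step. All names here are illustrative.

def run_steering(event, steps):
    """Apply steps in order; stop (reject) as soon as one fails."""
    for name, subset_key, algorithm in steps:
        if not algorithm(event.get(subset_key, [])):
            return f"rejected at {name}"
    return "accepted"

# Illustrative step sequence: confirm a calorimeter RoI, then require
# a matching track above an (assumed) 20 GeV pT cut.
steps = [
    ("confirm_em_cluster", "calo_rois", lambda rois: len(rois) > 0),
    ("match_track",        "tracks",    lambda pts: any(pt > 20.0 for pt in pts)),
]

event = {"calo_rois": ["roi1"], "tracks": [25.0, 4.0]}
print(run_steering(event, steps))  # -> accepted
```

The key property this captures is early rejection: a failing step aborts the sequence immediately, so the more expensive later algorithms run only on events that have survived the cheaper earlier ones.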