The CMS and LHCb experiments at CERN are moving towards heterogeneous and massively parallel computing architectures for processing and reducing data in real time. However, the different physics environments of the two experiments shape the design of their heterogeneous event selection systems in different ways.
Implementing an accelerated software trigger system poses multiple challenges: from software scheduling to integration with the rest of the DAQ system, and from the reconstruction algorithms, which must be designed to run efficiently on GPUs, to the trigger farm infrastructure.
The speakers will describe how CMS and LHCb redesigned some of the most time-consuming algorithms in their online reconstruction (among them tracking and local calorimeter reconstruction) to leverage the GPU's massively parallel architecture, allowing both experiments to increase their physics potential at sustainable cost.
In addition, the speakers will present and compare the approaches, constraints and trade-offs that motivated the design of the CMS and LHCb heterogeneous online event reconstruction and selection systems planned for LHC Run 3.