The ATLAS Collaboration makes use of a worldwide network of data centres – the Worldwide LHC Computing Grid – to carry out data processing and analysis. These data centres are typically built from commodity hardware and perform the full spectrum of ATLAS data processing, from reducing the raw data coming out of the detector to a manageable size, to producing plots for publication.

While the distributed grid approach has proven very successful, ATLAS researchers are also exploring the potential of high-performance computing (HPC) centres. HPC harnesses the power of purpose-built supercomputers constructed from specialised hardware and is widely used in other scientific disciplines.

However, HPC poses significant challenges for processing ATLAS data. First, access to supercomputers is usually strictly limited, with network connections from HPC compute nodes to the outside world severely restricted or non-existent. Second, the processor architecture may not be suitable for ATLAS software, and the installation of any required local software may be tightly controlled. Third, the system may only allow very large jobs using several thousand nodes, which is atypical of ATLAS workflows. Finally, the HPC centre may be geographically distant from the storage hosting the ATLAS data, which can cause network problems.

Figure 1: Andrej Filipčič (left) and Jan Jona Javoršek (right) from the Jožef Stefan Institute in Ljubljana, Slovenia, next to Vega. (Photo: B. Zebec/IZUM)

Despite these challenges, ATLAS collaborators have successfully exploited HPCs over the past few years, many of which top the well-known Top500 list of supercomputers. Access limitations were overcome by isolating the main computation from the parts requiring network access, such as data transfer. Software issues were addressed through the use of container technology, which allows ATLAS software to run on any operating system, and the development of "edge services", which enable computations to run offline without needing to contact external services.

The latest HPC to process ATLAS data is Vega, the first new petascale EuroHPC JU machine, hosted at the Institute of Information Science in Maribor, Slovenia (see Figure 1). Vega began operations in April 2021 and consists of 960 nodes, each containing 128 physical processor cores, for a total of 122,880 physical cores or 245,760 logical cores. To put that into perspective, the total number of cores provided to ATLAS by grid resources is around 300,000.

The Vega supercomputer in Slovenia is the latest HPC to process data from the ATLAS experiment.

Thanks to close ties with the community of ATLAS physicists in Slovenia, some of whom were heavily involved in the design and commissioning of Vega, the ATLAS collaboration was one of the first users to be granted official access. This benefited both the ATLAS collaboration, which was able to make use of a significant additional resource, and Vega, which received a steady and well-understood stream of work to support it during its commissioning phase.

As shown in Figure 2, Vega was almost continuously filled with ATLAS jobs from the time it was activated; periods when fewer jobs were running were due either to other users on Vega or to a lack of ATLAS work to be submitted. This huge additional computing power – effectively doubling ATLAS's available resources – was invaluable, allowing several large-scale data-processing campaigns to run in parallel. As a result, the ATLAS collaboration is heading towards the restart of the LHC with a fully updated Run 2 dataset and corresponding simulations, many of which have been significantly extended in statistics thanks to the additional resources provided by Vega.

Figure 2: Number of Vega processor cores occupied by ATLAS from April 2021 to April 2022, with the different colours showing different types of data processing. (Image: ATLAS Collaboration/CERN)

It is a testament to the robustness of ATLAS's distributed computing systems that they could be extended to a single site equal in size to the entire grid. While Vega will eventually be dedicated to other scientific projects, a fraction will continue to be devoted to ATLAS. Moreover, this successful experience shows that ATLAS members (and their data) are ready to jump on the next available HPC and exploit its full potential!
