Leonardo (supercomputer)

From Wikipedia, the free encyclopedia
Leonardo
  • Active: November 24, 2022
  • Sponsors: European High-Performance Computing Joint Undertaking
  • Operators: CINECA
  • Location: Bologna, Italy
  • Architecture: 13,824 Nvidia Ampere GPUs
  • Power: 6 MW
  • Operating system: Red Hat Enterprise Linux (RHEL)[1]
  • Space: 900+ m²
  • Memory: 2.8 petabytes
  • Storage: 110 petabytes
  • Speed: 250 petaFLOPS (peak)
  • Cost: €240 million
  • Website: Leonardo Pre-exascale Supercomputer

Leonardo is a petascale supercomputer located at the CINECA datacenter in Bologna, Italy. The system consists of an Atos BullSequana XH2000 computer, with close to 14,000 Nvidia Ampere GPUs and 200 Gbit/s Nvidia Mellanox HDR InfiniBand connectivity.[2] Inaugurated in November 2022, Leonardo is capable of 250 petaflops (250 quadrillion floating-point operations per second), making it one of the five fastest supercomputers in the world.[3][4] It debuted on the TOP500 list in November 2022, ranking fourth in the world and second in Europe.[5]

Architecture

The system is constructed as three separate "modules".[6] The first, known as the "booster module", consists of 13,824 Nvidia A100 GPUs, grouped four per node, for a total of 3,456 nodes. This module is capable of 240.50 LINPACK petaflops and came online in the autumn of 2022. The second module, called the "data centric module", is made up of 1,536 nodes based on Intel Sapphire Rapids CPUs,[7] and is capable of 8.97 LINPACK petaflops.[8] These two computing modules are complemented by a "front-end & service module" and backed by two storage systems: 5 PB of high-IOPS storage with 1 TB/s of bandwidth, and 100 PB of high-capacity storage with 500 GB/s of bandwidth. The components are linked by a 200 Gbit/s InfiniBand interconnect.[9]
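
The headline numbers follow directly from the per-module figures above. The sketch below (Python, illustrative only; it simply combines the quantities quoted in this section and is not an official CINECA or TOP500 calculation) shows how the GPU count and the roughly 250-petaflops total are obtained.

    # Illustrative arithmetic based only on figures quoted in this article;
    # not an official CINECA or TOP500 calculation.
    booster_nodes = 3456
    gpus_per_node = 4
    total_gpus = booster_nodes * gpus_per_node               # 13,824 Nvidia A100 GPUs

    booster_linpack_pf = 240.50                               # booster module, LINPACK petaflops
    data_centric_linpack_pf = 8.97                            # data centric module, LINPACK petaflops
    combined_pf = booster_linpack_pf + data_centric_linpack_pf   # ≈ 249.5, close to the ~250 petaflops cited above

    print(total_gpus, round(combined_pf, 2))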

Booster Module

The 3,456 nodes that make up the "booster module" are custom BullSequana X2135 "Da Vinci" blade servers, each composed of:

  • 1x Intel Xeon 8358 CPU, with 32 cores running at 2.6 GHz
  • 512 GB of DDR4 RAM at 3200 MHz
  • 4x Nvidia custom Ampere GPUs, 64 GB HBM2
  • 2x Nvidia HDR InfiniBand network adapters, each with two 100 Gbit/s ports

Each node is rated at a theoretical peak of 89.4 teraflops.[10]
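
Combining the per-node peak with the node count gives an approximate aggregate figure for the module; the sketch below (Python, illustrative only) also compares it with the measured LINPACK value quoted in the Architecture section. The resulting efficiency of roughly 78% is an inference from those two numbers, not an officially published figure.

    # Illustrative only: aggregate theoretical peak of the booster module,
    # derived from the per-node figure above.
    nodes = 3456
    peak_per_node_tf = 89.4                               # teraflops per node (theoretical peak)
    aggregate_peak_pf = nodes * peak_per_node_tf / 1000   # ≈ 309 petaflops theoretical peak

    linpack_pf = 240.50                                   # measured LINPACK figure (Architecture section)
    efficiency = linpack_pf / aggregate_peak_pf           # ≈ 0.78

    print(round(aggregate_peak_pf, 1), round(efficiency, 2))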

Data Centric Module

The "data centric module" consists of 1536 nodes, each comprising a BullSequana X2610 compute blade with:

  • 2x Intel Sapphire Rapids CPUs, with 56 cores
  • 512 GB RAM DDR5 4800 MHz
  • 1x NVidia HDR InfiniBand network adapter, with one 100 Gbit/s port
  • 8 TB NVM storage
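
The module's aggregate CPU core count follows from the per-node figures above; the short sketch below (Python, illustrative only) assumes, as listed, two CPUs per node and 56 cores per CPU.

    # Illustrative arithmetic from the per-node figures above; not an official total.
    nodes = 1536
    cpus_per_node = 2
    cores_per_cpu = 56
    total_cores = nodes * cpus_per_node * cores_per_cpu   # 172,032 CPU cores
    print(total_cores)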

Front-end & Service Module

This module is responsible for login handling, visualisation, and system services and management. It consists of 16 nodes, each with:

  • 2x Intel Ice Lake CPUs, with 32 cores
  • 512 GB of RAM
  • 1x Nvidia HDR InfiniBand network adapter, with two 100 Gbit/s ports
  • 6 TB of disk storage in a RAID-1 configuration

16 additional nodes are also equipped with:

Storage

Leonardo has access to two storage tiers:

  • A "fast" tier based on 31x DDN EXAScaler ES400NVX2 appliances, each with 24x 7.68 TB NVMe SSDs
  • A "capacity" tier based on 31x DDN EXAScaler SFA799X appliances, each with 82x 18 TB 7,200 rpm SAS HDDs plus two JBOD expansions, each holding a further 82x 18 TB 7,200 rpm SAS HDDs

Funding

Leonardo is part of the European High-Performance Computing Joint Undertaking and receives €120 million in funding from the EU. This is matched by a further €120 million from the Italian Ministry of Education, University and Research.[11]

Bologna Technopole

The building housing Leonardo is known as the Bologna Technopole and was formerly home to a tobacco factory.[12] It was built in 1952 by Pier Luigi Nervi, who was famous for his innovative use of reinforced concrete. In addition to Leonardo, the building also houses the European Centre for Medium-Range Weather Forecasts' new supercomputer, consisting of four Atos BullSequana XH2000 clusters, which replaces the centre's earlier facility in Reading, England.[13]

See also

  • LUMI, another EuroHPC supercomputer based in Kajaani, Finland.
  • EuroHPC (European High-Performance Computing Joint Undertaking).

References

  1. ^ Turisini, Matteo; Cestari, Mirko; Amati, Giorgio (2024-01-15). "LEONARDO: A Pan-European Pre-Exascale Supercomputer for HPC and AI applications". Journal of large-scale research facilities JLSRF. 9 (1). doi:10.17815/jlsrf-8-186. ISSN 2364-091X.
  2. ^ "Leonardo: The European HPC path toward the digital Era". Cineca. October 15, 2020. Retrieved 2021-07-14.
  3. ^ "LEONARDO is inaugurated: Europe welcomes a new world-leading supercomputer". European High-Performance Computing Joint Undertakin (Press release). 2022-11-24.
  4. ^ "Arriva Leonardo, 30 tir per portare il supercomputer del Tecnopolo". la Repubblica (in Italian). 2022-07-21. Retrieved 2022-08-11.
  5. ^ "November 2022 | TOP500". TOP500. Retrieved 2023-04-16.
  6. ^ "EuroHPC Systems Leonardo, Lumi Share More Details as Finish Lines Near". HPCwire. May 21, 2021. Retrieved 2021-08-24.
  7. ^ "Nvidia and Cineca Team Up to Create World's Fastest AI Supercomputer". Tom's Hardware. October 15, 2020. Retrieved 2021-08-03.
  8. ^ "Leonardo, the new Italian answer to the high performance computing demand". ACROSS Project. May 31, 2021. Retrieved 2021-08-24.
  9. ^ Laure, Erwin (January 27, 2021). Towards Exascale Computing in Europe (PDF) (Speech). Lecture Series of the Research Center HPC. Universität Innsbruck (online).
  10. ^ "Technical Info | Leonardo Pre-exascale Supercomputer". April 27, 2022. Retrieved 2022-08-11.
  11. ^ "Ricerca: la commissaria UE Gabriel e il Ministro Messa all'hub dei big data del tecnopolo di Bologna". Ministero dell'Università e della Ricerca (in Italian). February 23, 2022. Retrieved 2022-08-11.
  12. ^ "Technopole | Leonardo Pre-exascale Supercomputer". May 9, 2022. Retrieved 2022-08-11.
  13. ^ Smolaks, Max (June 23, 2017). "Italy chosen to host meteorological supercomputer". www.datacenterdynamics.com. Retrieved 2022-08-11.