Cosmic lights in the forest
PRIYA, the largest-ever suite of supercomputer simulations of Lyman-α forest spectral data, illuminates the large-scale structure of the universe
Like celestial beacons, distant quasars shine with the brightest light in the universe, emitting more light than our entire Milky Way galaxy. The light comes from matter ripped apart as it is swallowed by a supermassive black hole. By analyzing that light, astronomers can pin down cosmological parameters, the numerical constraints used to trace the evolution of the entire universe over the billions of years since the Big Bang.
Quasar light reveals clues about the large-scale structure of the universe as it shines through enormous clouds of neutral hydrogen gas, formed shortly after the Big Bang, that span 20 million light years or more.
Using quasar light data and the National Science Foundation (NSF)-funded Frontera supercomputer at the Texas Advanced Computing Center (TACC), astronomers developed PRIYA, the largest suite of hydrodynamic simulations yet made for modeling large-scale structure in the universe.
“We’ve created a new simulation model to compare with data that exists of the real universe,” said Simeon Bird, an assistant professor of astronomy at the University of California, Riverside.
Bird and colleagues developed PRIYA, which draws on optical light data from the Extended Baryon Oscillation Spectroscopic Survey (eBOSS) of the Sloan Digital Sky Survey (SDSS). The team published the work announcing PRIYA in October 2023 in the Journal of Cosmology and Astroparticle Physics (JCAP).
“We compare eBOSS data to a variety of simulation models with different cosmological parameters and different initial conditions to the universe, such as different matter densities,” Bird explained. “You find the one that works best and how far away from that one you can go without breaking the reasonable agreement between the data and simulations. This knowledge tells us how much matter there is in the universe, or how much structure there is in the universe.”
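In outline, the fitting procedure Bird describes can be pictured with a short sketch. This is a toy Python illustration only; the published analysis uses simulation emulators and full likelihood machinery, and every number and function name below is invented for the example.

import numpy as np

# Toy "observed" Lyman-alpha flux power spectrum with uncertainties
k = np.linspace(0.001, 0.02, 20)            # wavenumbers, in s/km
observed = 0.3 * k ** -0.2                  # placeholder data values
sigma = 0.05 * observed                     # placeholder 1-sigma errors

# Toy "simulation" predicting the spectrum for given parameters
def simulated_power(k, matter_density, sigma8):
    return 0.3 * (matter_density / 0.3) * (sigma8 / 0.8) ** 2 * k ** -0.2

# Scan a grid of cosmologies and keep the best chi-squared score
best = None
for om in np.linspace(0.25, 0.35, 21):      # matter density values
    for s8 in np.linspace(0.70, 0.90, 21):  # clumpiness (sigma8) values
        model = simulated_power(k, om, s8)
        chi2 = np.sum(((observed - model) / sigma) ** 2)
        if best is None or chi2 < best[0]:
            best = (chi2, om, s8)

print("best fit: chi2=%.2f, Omega_m=%.3f, sigma8=%.3f" % best)

Models whose chi-squared score stays close to the best value define how far the parameters can move without, in Bird’s words, breaking the reasonable agreement between the data and simulations.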
The PRIYA simulation suite is connected to large-scale cosmological simulations also co-developed by Bird, called ASTRID, which is used to study galaxy formation, the coalescence of supermassive black holes, and the re-ionization period early in the history of the universe. PRIYA goes a step further. It takes the galaxy information and the black hole formation rules found in ASTRID and changes the initial conditions.
“With these rules, we can take the model that we developed that matches galaxies and black holes, and then we change the initial conditions and compare it to the Lyman-𝛼 forest data from eBOSS of the neutral hydrogen gas,” Bird said.
The ‘Lyman-𝛼 forest’ gets its name from the ‘forest’ of closely packed absorption lines on a graph of the quasar spectrum resulting from electron transitions between energy levels in atoms of neutral hydrogen. The ‘forest’ indicates the distribution, density, and temperature of enormous intergalactic neutral hydrogen clouds. What’s more, the lumpiness of the gas indicates the presence of dark matter, a hypothetical substance that cannot be seen yet is evident by its observed tug on galaxies.
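To make that concrete (standard atomic physics rather than anything specific to the new study): a hydrogen cloud at redshift z absorbs the quasar’s light at the rest-frame Lyman-𝛼 wavelength, stretched by cosmic expansion to

\lambda_{\text{obs}} = (1 + z)\,\lambda_{\text{Ly}\alpha}, \qquad \lambda_{\text{Ly}\alpha} \approx 1215.67\ \text{Å},

so a cloud at z = 2, for example, absorbs at roughly 3647 Å. Each line in the ‘forest’ thus tags a cloud at a different distance along the line of sight.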
PRIYA simulations have been used to refine cosmological parameters in work submitted to JCAP in September 2023 by Simeon Bird and his UC Riverside colleagues M.A. Fernandez and Ming-Feng Ho.
Previous analyses of the neutrino mass parameters did not agree with data from the Cosmic Microwave Background (CMB) radiation, described as the afterglow of the Big Bang. Astronomers use CMB data from the Planck space observatory to place tight constraints on the mass of neutrinos. Neutrinos are the most abundant massive particles in the universe, so pinpointing their mass is important for cosmological models of large-scale structure in the universe.
“We made a new analysis with simulations that were a lot larger and better designed than anything before. The earlier discrepancies with the Planck CMB data disappeared, and were replaced with another tension, similar to what is seen in other low redshift large-scale structure measurements,” Bird said. “The main result of the study is to confirm the σ8 tension between CMB measurements and weak lensing exists out to redshift 2, ten billion years ago.”
One well-constrained parameter from the PRIYA study is σ8, which measures the amount of structure, traced by the neutral hydrogen gas, on a scale of 8 megaparsecs, or about 26 million light years. “This indicates the number of clumps of dark matter that are floating around there,” Bird said.
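For reference, the conventional definition from the cosmology literature (not spelled out in the article) is that σ8 is the root-mean-square matter density fluctuation in spheres of that radius:

\sigma_8^2 = \frac{1}{2\pi^2} \int_0^\infty k^2\, P(k)\, W^2(kR)\, dk, \qquad R = 8\,h^{-1}\,\text{Mpc},

where P(k) is the matter power spectrum and W is the Fourier transform of a spherical top-hat window. A larger σ8 means a lumpier universe on these scales.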
Another constrained parameter is ns, the scalar spectral index. It is connected to how the clumpiness of dark matter varies with the size of the region analyzed, and it indicates how fast the universe was expanding just moments after the Big Bang.
“The scalar spectral index sets up how the universe behaves right at the beginning. The whole idea of PRIYA is to work out the initial conditions of the universe, and how the high energy physics of the universe behaves,” Bird said.
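In the standard parametrization (again, textbook convention rather than anything unique to PRIYA), ns is the power-law slope of the primordial spectrum of density fluctuations:

P_{\text{prim}}(k) \propto k^{n_s}.

A value of ns = 1 would mean fluctuations of equal strength on every scale; measurements place it slightly below one, near 0.96 to 0.97.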
Supercomputers were needed for the PRIYA simulations, Bird explained, simply because they were so big.
“The memory requirements for PRIYA simulations are so big you cannot put them on anything other than a supercomputer,” Bird said.
TACC awarded Bird a Leadership Resource Allocation on the Frontera supercomputer. Additionally, analysis computations were performed using the resources of the UC Riverside High Performance Computer Cluster.
The PRIYA simulations on Frontera are some of the largest cosmological simulations yet made, requiring over 100,000 core-hours to simulate a system of 3072^3 (about 29 billion) particles in a ‘box’ 120 megaparsecs on a side, or about 391 million light years across. In total, the PRIYA simulations consumed over 600,000 node hours on Frontera.
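Those figures check out with simple arithmetic (using 1 parsec ≈ 3.26 light years):

3072^3 \approx 2.9 \times 10^{10} \approx 29 \text{ billion particles}, \qquad 120\ \text{Mpc} \times 3.26 \times 10^6\ \text{ly/Mpc} \approx 3.9 \times 10^8\ \text{ly}.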
“Frontera was very important to the research because the supercomputer needed to be big enough that we could run one of these simulations fairly easily, and we needed to run a lot of them. Without something like Frontera, we wouldn't be able to solve them. It's not that it would take a long time -- they just wouldn't be able to run at all,” Bird said.
In addition, TACC’s Ranch system provided long-term storage for PRIYA simulation data.
“Ranch is important, because now we can reuse PRIYA for other projects. This could double or triple our science impact,” Bird said.
“Our appetite for more compute power is insatiable," Bird concluded. "It's crazy that we're sitting here on this little planet observing most of the universe.”
TACC’s Frontera, the fastest academic supercomputer in the US, is a strategic national capability computing system funded by the National Science Foundation.
JOURNAL
Journal of Cosmology and Astroparticle Physics
METHOD OF RESEARCH
Computational simulation/modeling
ARTICLE TITLE
PRIYA: a new suite of Lyman-α forest simulations for cosmology
High Performance Computing Center of the University of Stuttgart and Hewlett Packard Enterprise to build exascale supercomputer
The two organizations announced an agreement to build two supercomputers at HLRS: Hunter and Herder
The University of Stuttgart and Hewlett Packard Enterprise (HPE) have announced an agreement to build two new supercomputers at the High-Performance Computing Center of the University of Stuttgart (HLRS).
In the first stage, a transitional supercomputer, called Hunter, will begin operation in 2025. This will be followed in 2027 with the installation of Herder, an exascale system that will provide a significant expansion of Germany’s high-performance computing (HPC) capabilities. Hunter and Herder will offer researchers world-class infrastructure for simulation, artificial intelligence (AI), and high-performance data analytics (HPDA) to power cutting-edge academic and industrial research in computational engineering and the applied sciences.
The total combined cost for Hunter and Herder is €115 million. Funding will be provided through the Gauss Centre for Supercomputing (GCS), the alliance of Germany's three national supercomputing centers. Half of this funding will be provided by the German Federal Ministry of Education and Research (BMBF), and the second half by the State of Baden-Württemberg's Ministry of Science, Research, and Arts.
Hunter to Herder: a two-step climb to exascale
Hunter will replace HLRS’s current flagship supercomputer, Hawk. It is conceived as a stepping stone to enable HLRS’s user community to transition to the massively parallel, GPU-accelerated structure of Herder.
Hunter will be based on the HPE Cray EX4000 supercomputer, which is designed to deliver exascale performance to support large-scale workloads across modeling, simulation, AI, and HPDA. Each of the 136 HPE Cray EX4000 nodes will be equipped with four HPE Slingshot high-performance interconnects. Hunter will also leverage the next generation of Cray ClusterStor, a storage system purpose-engineered to meet the demanding input/output requirements of supercomputers, and the HPE Cray Programming Environment, which offers programmers a comprehensive set of tools for developing, porting, debugging, and tuning applications.
Hunter will raise HLRS’s peak performance to 39 petaFLOPS (39 × 10^15 floating point operations per second), an increase from the 26 petaFLOPS possible with its current supercomputer, Hawk. More importantly, it will transition away from Hawk’s emphasis on CPU processors to make greater use of more energy-efficient GPUs.
Hunter will be based on the AMD Instinct™ MI300A accelerated processing unit (APU), which combines CPU and GPU processors and high-bandwidth memory into a single package. By reducing the physical distance between different types of processors and creating unified memory, the APU enables fast data transfer speeds, impressive HPC performance, easy programmability and great energy efficiency. This will slash the energy required to operate Hunter in comparison to Hawk by approximately 80% at peak performance.
Herder will be designed as an exascale system capable of speeds on the order of one quintillion (10^18) FLOPS, a major leap in power that will open exciting new opportunities for key applications run at HLRS. The final configuration, based on accelerator chips, will be determined by the end of 2025.
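For scale, the step from Hunter to Herder is simple arithmetic rather than a vendor specification: one exaFLOPS is 10^18 floating point operations per second, so

\frac{10^{18}}{39 \times 10^{15}} \approx 26,

roughly a 25-fold jump over Hunter’s 39-petaFLOPS peak.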
The combination of CPUs and accelerators in Hunter and Herder will require that current users of HLRS’s supercomputer adapt existing code to run efficiently. For this reason, HPE will collaborate with HLRS to support its user community in adapting software to harness the full performance of the new systems.
Supporting scientific excellence in Stuttgart, Germany, and beyond
HLRS's leap to exascale is part of the Gauss Centre for Supercomputing's national strategy for the continuing development of the three GCS centers: The upcoming JUPITER supercomputer at the Jülich Supercomputing Centre will be designed for maximum performance and will be the first exascale system in Europe in 2025, while the Leibniz Supercomputing Centre is planning a system for widescale usage in 2026. The focus of HLRS’s Hunter and Herder supercomputers will be on computational engineering and industrial applications. Together, these systems will be designed to ensure that GCS provides optimized resources of the highest performance class for the entire spectrum of cutting-edge computational research in Germany.
For researchers in Stuttgart, Hunter and Herder will open many new opportunities for research across a wide range of applications in engineering and the applied sciences. For example, they will enable the design of more fuel-efficient vehicles, more productive wind turbines, and new materials for electronics and other applications. New AI capabilities will open new opportunities for manufacturing and offer innovative approaches for making large-scale simulations faster and more energy efficient. The systems will also support research to address global challenges like climate change, and could offer data analytics resources that help public administration to prepare for and manage crisis situations. In addition, Hunter and Herder will be state-of-the-art computing resources for Baden-Württemberg’s high-tech engineering community, including the small and medium-sized enterprises that form the backbone of the regional economy.
Statements
Mario Brandenburg (Parliamentary State Secretary, Federal Ministry for Education and Research, BMBF)
“Funded by the BMBF and the State of Baden-Württemberg, the expansion of the computing infrastructure of the Gauss Centre for Supercomputing at its Stuttgart location is an important step on the road to more computing power for Germany’s research and innovation landscape. The unique concept behind the computing architecture at HLRS will ensure that not just science but also industry, SMEs, and start-ups will have first-class conditions developing new innovations. This expansion also means increased computing capacity for the development of AI and a strengthening of Germany’s AI infrastructure, in accordance with the federal research ministry’s AI action plan.“
Petra Olschowski (Baden-Württemberg Minister of Science, Research, and Arts)
“High-performance computing means rapid development. As the peak performance of supercomputers grows, they are as crucial for cutting-edge science as for innovative products and processes in key industrial sectors. Baden-Württemberg is both a European leader and internationally competitive in the fields of supercomputing and artificial intelligence. As part of the University of Stuttgart, HLRS thus has a key role to play — it is not just the impressive performance of the supercomputer but also the methodological knowledge that the center has assembled that helps our cutting-edge computational research to achieve breathtaking results, for example in climate protection or for more environmentally sustainable mobility.“
Prof. Dr. Wolfram Ressel (Rector, University of Stuttgart)
“With Hunter and Herder, the University of Stuttgart continues its commitment to high-performance computing as the foundation of its successful excellence strategy. This expansion will especially strengthen Stuttgart’s leading position in research using computer simulation and artificial intelligence.”
Anna Steiger (Chancellor, University of Stuttgart)
“Supporting cutting-edge science while maximizing energy efficiency is a central concern for everyone at the University of Stuttgart. Hunter and Herder constitute a decisive reaction to the challenges of limiting CO2 emissions, and Herder will deliver not only dramatically higher computing performance but also excellent energy performance.”
Prof. Dr. Michael Resch (Director, High-Performance Computing Center Stuttgart)
“HPE has been a reliable partner since 2019, and we are excited to be making the jump with them to the next order of magnitude in computing performance, the exaFLOP. Using GPU technology from AMD, we are also confident that we will be well prepared for the challenges of the future.”
Justin Hotard (Executive Vice President and General Manager, HPC, AI & Labs, Hewlett Packard Enterprise)
“HLRS has demonstrated the power of supercomputing in research and applied science, and we are honored to have been with them on this journey. We look forward to building on our collaboration to pave the way to exascale for HLRS using the HPE Cray EX supercomputer. The new system will enable scientific and technological innovation to accelerate economic growth.”
Mario Silveira (Corporate Vice President OEM Sales, AMD)
”AMD is pleased to expand our collaboration with HLRS in Stuttgart and HPE. We are providing our cutting-edge AMD Instinct™ MI300A datacenter accelerator to the Hunter project, aiming to enhance performance, efficiency, and data transfer speeds. This initiative will establish a state-of-the-art infrastructure tailored for research, AI workloads, and simulations. Anticipated for arrival by 2025, Hunter aligns with HLRS's ambitious exascale plans for Germany, showcasing our commitment to advancing technological capabilities and fostering innovation together with our partners in the years to come.”
Dr. Bastian Koller (General Manager, HLRS)
“Increasingly it’s not just faster hardware but optimal usage of the system that is the greatest performance factor in simulation and artificial intelligence. We are particularly excited that we have found a globally leading partner for these topics in Hewlett Packard Enterprise, who together with AMD will open up new horizons of performance for our clients.”
About the High-Performance Computing Center Stuttgart
The High-Performance Computing Center Stuttgart (HLRS) was established in 1996 as the first German national high-performance computing center, building on a tradition of supercomputing at the University of Stuttgart that stretches back to 1959. As a research institution affiliated with the University of Stuttgart and a founding member of the Gauss Centre for Supercomputing — the alliance of Germany's three national supercomputing centers — HLRS provides state-of-the-art HPC services to academic users and industry. HLRS operates one of Europe's most powerful supercomputers, provides advanced training in HPC programming and simulation, and conducts research to address key problems facing the future of supercomputing. Among HLRS's areas of expertise are parallel programming, numerical methods for HPC, visualization, cloud computing concepts, high-performance data analytics (HPDA), and artificial intelligence. Users of HLRS computing systems are active across a wide range of disciplines, with an emphasis on computational engineering and applied science.