Thursday, November 28, 2024

 

Gordon Bell Climate Prize Goes to KAUST Frontier Users’ Exascale Climate Emulator



Team awarded for developing highly scalable climate emulator that offers faster, radically enhanced high-resolution simulations without the need for massive data storage



DOE/Oak Ridge National Laboratory

Image: KAUST researchers, winners of the 2024 Gordon Bell Prize for Climate Modelling. The team was recognized for developing an extreme-scale climate emulator that offers radically enhanced high-resolution simulation without the need for massive data storage.

Credit: Lillie Elliot, SC Photography




The 2024 Gordon Bell Prize for Climate Modelling has been awarded to a team of researchers led by the King Abdullah University of Science and Technology, or KAUST, Saudi Arabia, who used the Frontier supercomputer to develop an exascale climate emulator with radically enhanced resolution but without the computational expense and data storage requirements of state-of-the-art climate models.

Team members also include researchers from the National Center for Atmospheric Research, the University of Notre Dame, NVIDIA, Saint Louis University and Lahore University of Management Sciences.

The winners of the climate prize, awarded by the Association for Computing Machinery, were announced on Nov. 21 at the International Conference for High Performance Computing, Networking, Storage, and Analysis in Atlanta, Georgia.

“This is a tremendous honor and we are extremely proud of our achievement,” said Marc Genton, Al-Khawarizmi distinguished professor of statistics at KAUST. “We believe this emulator will significantly enhance our ability to understand climate events much better at the local level as well as on the global scale.”

“Climate models are incredibly complex and can take weeks or months to run, even on the fastest supercomputers,” he added. “They generate massive amounts of data that become nearly impossible to store, and it’s becoming a bigger and bigger problem as climate scientists are constantly pushing for higher resolution.”

Earth system models, or ESMs, are supercomputer programs used to calculate changes in the atmosphere, oceans, land and ice sheets. The simulations are based on the quantifiable laws of physics and are some of the most computationally demanding calculations to perform in terms of complexity, power consumption and processing time. Nevertheless, ESMs are essential tools for predicting the impacts of climate change.

“The climate emulator solves two problems: speeding up computations and reducing storage needs,” Genton said. “It’s designed to mimic model outputs on demand without storing petabytes of data. Instead of saving every result, all we have to store are the emulator code and necessary parameters, which, in principle, allows us to generate an infinite number of emulations whenever we need them.”
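To make the storage argument concrete, the short Python sketch below illustrates the idea of storing parameters rather than output; the class name and the toy Gaussian model are invented for illustration and are not the team’s actual emulator.

import numpy as np

# Toy illustration of "store parameters, not petabytes": the only persistent
# state is a mean field and a variability field; realizations are drawn on demand.
class ToyClimateEmulator:
    def __init__(self, mean_field, std_field, seed=0):
        self.mean_field = mean_field      # fitted parameter 1
        self.std_field = std_field        # fitted parameter 2
        self.rng = np.random.default_rng(seed)

    def emulate(self, n_realizations=1):
        # Generate any number of synthetic fields from the stored parameters.
        shape = (n_realizations,) + self.mean_field.shape
        return self.mean_field + self.std_field * self.rng.standard_normal(shape)

# Fit the toy parameters from a short "training run" (random data here),
# then discard the raw output and keep only the parameters.
training_run = np.random.default_rng(1).standard_normal((100, 64, 64))
emulator = ToyClimateEmulator(training_run.mean(axis=0), training_run.std(axis=0))
samples = emulator.emulate(n_realizations=5)
print(samples.shape)  # (5, 64, 64)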

Less is more

By leveraging the latest advances in graphics processing unit, or GPU, hardware and mixed-precision arithmetic, the team’s climate emulator offers a remarkable resolution of 3.5 kilometers (approximately 2.2 miles) and can replicate local conditions on a timescale from days to hours.

“Using mixed precision to improve performance is something rather innovative in the field that also helps us preserve the emulator’s accuracy,” said Sameh Abdulah, a high-performance computing research scientist at KAUST.

“Not every element in the simulation needs to be calculated in double precision,” Abdulah added. “Mixing the precision allows us to prioritize the accuracy based on the most important elements, which in turn speeds up the overall calculations.”
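A rough Python illustration of that trade-off follows; the particular split shown (a single-precision matrix multiply checked against a double-precision reference) is a generic example chosen for brevity, not the emulator’s actual mixed-precision scheme.

import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((2000, 2000))
b = rng.standard_normal((2000, 2000))

# Full double precision: the accuracy reference.
ref = a @ b

# Mixed precision: perform the bulk of the arithmetic in float32 (faster and
# half the memory traffic), then carry the result forward in float64.
prod32 = a.astype(np.float32) @ b.astype(np.float32)
mixed = prod32.astype(np.float64)

rel_err = np.abs(mixed - ref).max() / np.abs(ref).max()
print(f"max relative error from the single-precision step: {rel_err:.2e}")

In practice, only the elements whose accuracy matters least would be demoted to lower precision, which is where the speedup comes from.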

The emulator is highly scalable and has demonstrated exceptional performance on four of the world’s top 10 most powerful supercomputers, including the Frontier and Summit supercomputers at the Oak Ridge Leadership Computing Facility. The OLCF is a Department of Energy Office of Science user facility and is located at DOE’s Oak Ridge National Laboratory.

The emulator also performed well on the Alps supercomputer, ranked no. 6, at the Swiss National Supercomputing Centre in Lugano, Switzerland, and on the Leonardo supercomputer, ranked no. 7, at the CINECA data center in Bologna, Italy. The team also made extensive use of KAUST’s Shaheen III, ranked no. 23, the largest and most powerful supercomputer in the Middle East.

“Sustainable computing is another advantage. Getting the answer faster means less storage, which also means saving energy,” Genton said. “Supercomputing requires a lot of energy. By mixing the precision, we reduce the time we need to run, making it more sustainable for climate studies by getting more out of the machine.”

When asked earlier what the team wanted to do next, Abdulah had a straightforward answer: “winning the prize,” he said. “And now we’ve finally done it.”

Read the full story: “Frontier Users’ Exascale Climate Emulator Nominated for Gordon Bell Climate Prize.”

UT-Battelle manages ORNL for DOE’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. The Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.


New AI tool generates realistic satellite images of future flooding


The method could help communities visualize and prepare for approaching storms



Massachusetts Institute of Technology



Visualizing the potential impacts of a hurricane on people’s homes before it hits can help residents prepare and decide whether to evacuate. 

MIT scientists have developed a method that generates satellite imagery from the future to depict how a region would look after a potential flooding event. The method combines a generative artificial intelligence model with a physics-based flood model to create realistic, bird’s-eye-view images of a region, showing where flooding is likely to occur given the strength of an oncoming storm.

As a test case, the team applied the method to Houston and generated satellite images depicting what certain locations around the city would look like after a storm comparable to Hurricane Harvey, which hit the region in 2017. The team compared these generated images with actual satellite images taken of the same regions after Harvey hit. They also compared them with AI-generated images produced without the physics-based flood model.

The team’s physics-reinforced method generated satellite images of future flooding that were more realistic and accurate. The AI-only method, in contrast, generated images of flooding in places where flooding is not physically possible. 

The team’s method is a proof-of-concept, meant to demonstrate a case in which generative AI models can generate realistic, trustworthy content when paired with a physics-based model. In order to apply the method to other regions to depict flooding from future storms, it will need to be trained on many more satellite images to learn how flooding would look in other regions.

“The idea is: One day, we could use this before a hurricane, where it provides an additional visualization layer for the public,” says Björn Lütjens, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences, who led the research while he was a doctoral student in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “One of the biggest challenges is encouraging people to evacuate when they are at risk. Maybe this could be another visualization to help increase that readiness.”

To illustrate the potential of the new method, which they have dubbed the “Earth Intelligence Engine,” the team has made it available as an online resource for others to try.

The researchers report their results today in the journal IEEE Transactions on Geoscience and Remote Sensing. The study’s MIT co-authors include Brandon Leshchinskiy; Aruna Sankaranarayanan; and Dava Newman, professor of AeroAstro and director of the MIT Media Lab; along with collaborators from multiple institutions.

Generative adversarial images

The new study is an extension of the team’s efforts to apply generative AI tools to visualize future climate scenarios. 

“Providing a hyper-local perspective of climate seems to be the most effective way to communicate our scientific results,” says Newman, the study’s senior author. “People relate to their own zip code, their local environment where their family and friends live. Providing local climate simulations becomes intuitive, personal, and relatable.”

For this study, the authors use a conditional generative adversarial network, or GAN, a type of machine learning method that can generate realistic images using two competing, or “adversarial,” neural networks. The first “generator” network is trained on pairs of real data, such as satellite images before and after a hurricane. The second “discriminator” network is then trained to distinguish between the real satellite imagery and the imagery synthesized by the first network.

Each network automatically improves its performance based on feedback from the other network. The idea, then, is that such an adversarial push and pull should ultimately produce synthetic images that are indistinguishable from the real thing. Nevertheless, GANs can still produce “hallucinations,” or factually incorrect features in an otherwise realistic image that shouldn’t be there. 
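That adversarial push and pull can be summarized in a few lines of code. The PyTorch sketch below is a generic conditional-GAN training step with made-up layer sizes, included only to illustrate how the two networks train against each other; it is not the architecture used in the study.

import torch
import torch.nn as nn

# Generic conditional-GAN step: G maps a pre-storm image (the condition) to a
# synthetic post-storm image; D scores (condition, image) pairs as real or fake.
G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1))
D = nn.Sequential(nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(pre_img, post_img):
    # Discriminator: learn to tell real post-storm images from generated ones.
    fake = G(pre_img).detach()
    d_real = D(torch.cat([pre_img, post_img], dim=1))
    d_fake = D(torch.cat([pre_img, fake], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: learn to fool the discriminator with newly generated images.
    d_fake = D(torch.cat([pre_img, G(pre_img)], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# One step on random tensors, just to show the shapes involved.
print(train_step(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)))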

“Hallucinations can mislead viewers,” says Lütjens, who began to wonder whether such hallucinations could be avoided, such that generative AI tools can be trusted to help inform people, particularly in risk-sensitive scenarios. “We were thinking: How can we use these generative AI models in a climate-impact setting, where having trusted data sources is so important?” 

Flood hallucinations

In their new work, the researchers considered a risk-sensitive scenario in which generative AI is tasked with creating satellite images of future flooding that could be trustworthy enough to inform decisions of how to prepare and potentially evacuate people out of harm’s way.

Typically, policymakers can get an idea of where flooding might occur based on visualizations in the form of color-coded maps. These maps are the final product of a pipeline of physical models that usually begins with a hurricane track model, which then feeds into a wind model that simulates the pattern and strength of winds over a local region. This is combined with a flood or storm surge model that forecasts how wind might push any nearby body of water onto land. A hydraulic model then maps out where flooding will occur based on the local flood infrastructure and generates a visual, color-coded map of flood elevations over a particular region. 
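As a hypothetical outline (the stage names and signatures below are invented for illustration), that pipeline amounts to chaining the output of each model into the next:

# Hypothetical outline of the pipeline described above; each function is a
# stand-in for a full physical model, chained output-to-input.
def hurricane_track_model(storm_params):
    return "storm center positions over time"

def wind_model(track):
    return "wind speed and direction over the region"

def storm_surge_model(wind_field):
    return "water pushed onto land by the wind"

def hydraulic_model(surge, flood_infrastructure):
    return "flood elevation per grid cell"

def color_coded_map(flood_depths):
    return "color-coded map of flood elevations"

def run_pipeline(storm_params, flood_infrastructure):
    track = hurricane_track_model(storm_params)
    wind = wind_model(track)
    surge = storm_surge_model(wind)
    depths = hydraulic_model(surge, flood_infrastructure)
    return color_coded_map(depths)

print(run_pipeline({"category": 4}, {"levees": "unknown"}))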

“The question is: Can visualizations of satellite imagery add another level to this, that is a bit more tangible and emotionally engaging than a color-coded map of reds, yellows, and blues, while still being trustworthy?” Lütjens says. 

The team first tested how generative AI alone would produce satellite images of future flooding. They trained a GAN on actual satellite images taken over Houston before and after Hurricane Harvey. When they tasked the generator to produce new flood images of the same regions, they found that the images resembled typical satellite imagery, but a closer look revealed hallucinations in some images, in the form of floods where flooding should not be possible (for instance, in locations at higher elevation).

To reduce hallucinations and increase the trustworthiness of the AI-generated images, the team paired the GAN with a physics-based flood model that incorporates real, physical parameters and phenomena, such as an approaching hurricane’s trajectory, storm surge, and flood patterns. With this physics-reinforced method, the team generated satellite images around Houston that depict the same flood extent, pixel by pixel, as forecasted by the flood model.
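One plausible way to picture that pairing (again a sketch, not the published architecture) is to feed the physics model’s flood-extent mask into the generator as an extra input channel, so the synthesized image can only show water where the flood model says water would be.

import torch
import torch.nn as nn

# Illustrative generator conditioned on a physics-derived flood mask: the mask
# from the flood model enters as a fourth input channel alongside the RGB
# pre-storm image, constraining where flooding can appear in the output.
class PhysicsConditionedGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, pre_image, flood_mask):
        return self.net(torch.cat([pre_image, flood_mask], dim=1))

pre_image = torch.rand(1, 3, 64, 64)                    # pre-storm tile
flood_mask = (torch.rand(1, 1, 64, 64) > 0.7).float()   # from a flood model
fake_post = PhysicsConditionedGenerator()(pre_image, flood_mask)
print(fake_post.shape)  # torch.Size([1, 3, 64, 64])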

“We show a tangible way to combine machine learning with physics for a use case that’s risk-sensitive, which requires us to analyze the complexity of Earth’s systems and project future actions and possible scenarios to keep people out of harm’s way,” Newman says. “We can’t wait to get our generative AI tools into the hands of decision-makers at the local community level, which could make a significant difference and perhaps save lives.” 

The research was supported, in part, by the MIT Portugal Program, the DAF-MIT Artificial Intelligence Accelerator, NASA, and Google Cloud.

###

Written by Jennifer Chu, MIT News

