


APRIL 17, 2019
Artificial intelligence speeds efforts to develop clean, virtually limitless fusion energy

by John Greenwald, Princeton Plasma Physics Laboratory
Depiction of fusion research on a doughnut-shaped tokamak enhanced by artificial intelligence. Credit: Eliot Feibush/PPPL and Julian Kates-Harbeck/Harvard University

Artificial intelligence (AI), a branch of computer science that is transforming scientific inquiry and industry, could now speed the development of safe, clean and virtually limitless fusion energy for generating electricity. A major step in this direction is under way at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) and Princeton University, where a team of scientists working with a Harvard graduate student is for the first time applying deep learning—a powerful new version of the machine learning form of AI—to forecast sudden disruptions that can halt fusion reactions and damage the doughnut-shaped tokamaks that house the reactions.

Promising new chapter in fusion research

"This research opens a promising new chapter in the effort to bring unlimited energy to Earth," Steve Cowley, director of PPPL, said of the findings, which are reported in the current issue of Nature magazine. "Artificial intelligence is exploding across the sciences and now it's beginning to contribute to the worldwide quest for fusion power."

Fusion, which drives the sun and stars, is the fusing of light elements in the form of plasma—the hot, charged state of matter composed of free electrons and atomic nuclei—that generates energy. Scientists are seeking to replicate fusion on Earth for an abundant supply of power for the production of electricity.

Crucial to demonstrating the ability of deep learning to forecast disruptions—the sudden loss of confinement of plasma particles and energy—has been access to huge databases provided by two major fusion facilities: the DIII-D National Fusion Facility that General Atomics operates for the DOE in California, the largest facility in the United States, and the Joint European Torus (JET) in the United Kingdom, the largest facility in the world, which is managed by EUROfusion, the European Consortium for the Development of Fusion Energy. Support from scientists at JET and DIII-D has been essential for this work.

The vast databases have enabled reliable predictions of disruptions on tokamaks other than those on which the system was trained—in this case from the smaller DIII-D to the larger JET. The achievement bodes well for the prediction of disruptions on ITER, a far larger and more powerful tokamak that will have to apply capabilities learned on today's fusion facilities.

The deep learning code, called the Fusion Recurrent Neural Network (FRNN), also opens possible pathways for controlling as well as predicting disruptions.

Most intriguing area of scientific growth

"Artificial intelligence is the most intriguing area of scientific growth right now, and to marry it to fusion science is very exciting," said Bill Tang, a principal research physicist at PPPL, coauthor of the paper and lecturer with the rank and title of professor in the Princeton University Department of Astrophysical Sciences who supervises the AI project. "We've accelerated the ability to predict with high accuracy the most dangerous challenge to clean fusion energy."


Unlike traditional software, which carries out prescribed instructions, deep learning learns from its mistakes. Accomplishing this seeming magic are neural networks, layers of interconnected nodes—mathematical algorithms—that are "parameterized," or weighted by the program to shape the desired output. For any given input the nodes seek to produce a specified output, such as correct identification of a face or accurate forecasts of a disruption. Training kicks in when a node fails to achieve this task: the weights automatically adjust themselves for fresh data until the correct output is obtained.
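
The training loop the paragraph describes can be sketched in a few lines. This is a minimal, hypothetical illustration (a single sigmoid node and made-up data), not the FRNN itself:

```python
import numpy as np

# A toy version of the weight adjustment described above: a single
# sigmoid node trained by gradient descent to push its output toward a
# target. All numbers here are invented for illustration.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = np.array([0.5, -1.2, 0.3])   # one input example (3 features)
target = 1.0                     # the specified output we want
w = rng.normal(size=3)           # the "parameterized" weights
lr = 0.5                         # learning-rate step size

for _ in range(300):
    out = sigmoid(w @ x)                   # node's current output
    err = out - target                     # how far off it is
    w -= lr * err * out * (1 - out) * x    # weights adjust themselves

print(float(sigmoid(w @ x)))     # now close to the target of 1.0
```

Each pass through the loop is one round of the "training kicks in" step: the error between output and target drives a small correction to every weight.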

A key feature of deep learning is its ability to capture high-dimensional rather than one-dimensional data. For example, while non-deep learning software might consider the temperature of a plasma at a single point in time, the FRNN considers profiles of the temperature developing in time and space. "The ability of deep learning methods to learn from such complex data make them an ideal candidate for the task of disruption prediction," said collaborator Julian Kates-Harbeck, a physics graduate student at Harvard University and a DOE-Office of Science Computational Science Graduate Fellow who was lead author of the Nature paper and chief architect of the code.
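
The contrast with single-point input can be illustrated with a toy recurrent step over made-up temperature profiles. This is a hypothetical sketch of the general idea, not the actual FRNN architecture:

```python
import numpy as np

# Sketch (invented shapes and random weights): a recurrent network
# consumes a plasma temperature *profile* -- a vector of temperatures at
# several radial positions -- at every time step, rather than a single
# scalar temperature at one moment.

rng = np.random.default_rng(1)
n_radial, n_hidden, n_steps = 8, 16, 50

# Fake profile sequence: temperature at 8 radial points, 50 time steps.
profiles = rng.normal(size=(n_steps, n_radial))

W_in = rng.normal(scale=0.1, size=(n_hidden, n_radial))
W_h = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
w_out = rng.normal(scale=0.1, size=n_hidden)

h = np.zeros(n_hidden)
for x_t in profiles:                  # hidden state h carries the history
    h = np.tanh(W_in @ x_t + W_h @ h)

disruption_score = 1 / (1 + np.exp(-(w_out @ h)))  # risk score in (0, 1)
print(0.0 < disruption_score < 1.0)
```

The hidden state is what lets the network see the profile "developing in time and space" instead of a snapshot.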

Training and running neural networks relies on graphics processing units (GPUs), computer chips first designed to render 3-D images. Such chips are ideally suited for running deep learning applications and are widely used by companies to build AI capabilities such as understanding spoken language and enabling self-driving cars to observe road conditions.

Kates-Harbeck trained the FRNN code on more than two terabytes (10¹² bytes per terabyte) of data collected from JET and DIII-D. After running the software on Princeton University's Tiger cluster of modern GPUs, the team placed it on Titan, a supercomputer at the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility, and other high-performance machines.

A demanding task

Distributing the network across many computers was a demanding task. "Training deep neural networks is a computationally intensive problem that requires the engagement of high-performance computing clusters," said Alexey Svyatkovskiy, a coauthor of the Nature paper who helped convert the algorithms into a production code and now is at Microsoft. "We put a copy of our entire neural network across many processors to achieve highly efficient parallel processing," he said.
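
The scheme Svyatkovskiy describes, a full copy of the network on every processor, is data parallelism. A minimal sketch under invented shapes and a toy loss (real training runs on GPU frameworks, not NumPy):

```python
import numpy as np

# Data parallelism in miniature: each worker holds a full copy of the
# model and computes a gradient on its own shard of data; the gradients
# are averaged so every copy takes the identical update step.

rng = np.random.default_rng(2)
n_workers, n_params = 4, 5
w = rng.normal(size=n_params)  # the shared weights (same on all workers)

def local_gradient(w, shard):
    # toy least-squares gradient on this worker's data shard
    X, y = shard
    return 2 * X.T @ (X @ w - y) / len(y)

shards = [(rng.normal(size=(10, n_params)), rng.normal(size=10))
          for _ in range(n_workers)]

grads = [local_gradient(w, s) for s in shards]  # done in parallel in practice
avg_grad = np.mean(grads, axis=0)               # the "all-reduce" step
w -= 0.01 * avg_grad                            # identical update everywhere
print(w.shape)
```

Because every worker applies the same averaged gradient, the copies of the network stay in sync while each one only ever touches its own slice of the data.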

The software further demonstrated its ability to predict true disruptions within the 30-millisecond time frame that ITER will require, while reducing the number of false alarms. The code is now closing in on the ITER requirement of 95 percent correct predictions with fewer than 3 percent false alarms. While the researchers say that only live experimental operation can demonstrate the merits of any predictive method, their paper notes that the large archival databases used in the predictions "cover a wide range of operational scenarios and thus provide significant evidence as to the relative strengths of the methods considered in this paper."
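
The two headline figures (correct predictions with at least 30 milliseconds of warning, and false alarms) can be scored as in the following sketch, using invented shot data:

```python
# A prediction counts as a true positive only if the alarm fires at
# least 30 ms before the disruption; an alarm on a shot that never
# disrupts is a false alarm. All times below are made up.

LEAD_TIME_MS = 30.0

# (alarm_time_ms or None, disruption_time_ms or None) per shot
shots = [
    (950.0, 1000.0),  # alarm 50 ms early  -> true positive
    (990.0, 1000.0),  # alarm only 10 ms early -> missed (too late)
    (None, 1000.0),   # no alarm before a disruption -> missed
    (None, None),     # quiet shot, no alarm -> correct rejection
    (400.0, None),    # alarm on a quiet shot -> false alarm
]

disruptive = [(a, d) for a, d in shots if d is not None]
quiet = [(a, d) for a, d in shots if d is None]

tp = sum(1 for a, d in disruptive if a is not None and d - a >= LEAD_TIME_MS)
fa = sum(1 for a, _ in quiet if a is not None)

print(tp / len(disruptive))  # fraction of disruptions caught in time
print(fa / len(quiet))       # false-alarm fraction
```

Under the ITER target, the first fraction would need to reach 0.95 and the second to stay below 0.03.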

From prediction to control

The next step will be to move from prediction to the control of disruptions. "Rather than predicting disruptions at the last moment and then mitigating them, we would ideally use future deep learning models to gently steer the plasma away from regions of instability with the goal of avoiding most disruptions in the first place," Kates-Harbeck said. Highlighting this next step is Michael Zarnstorff, who recently moved from deputy director for research at PPPL to chief science officer for the laboratory. "Control will be essential for post-ITER tokamaks—in which disruption avoidance will be an essential requirement," Zarnstorff noted.

Progressing from AI-enabled accurate predictions to realistic plasma control will require more than one discipline. "We will combine deep learning with basic, first-principle physics on high-performance computers to zero in on realistic control mechanisms in burning plasmas," said Tang. "By control, one means knowing which 'knobs to turn' on a tokamak to change conditions to prevent disruptions. That's in our sights and it's where we are heading."


APRIL 25, 2019 REPORT

China's efforts to reduce air pollution in major cities found to increase pollution in nearby areas

by Bob Yirka , Phys.org
Credit: CC0 Public Domain

A team of researchers affiliated with institutions in China, the Netherlands, the Czech Republic, the U.S. and Austria has found that efforts by the Chinese government to reduce air pollution in its major cities have resulted in higher air pollution levels in nearby areas. The group has published a paper describing its findings in the journal Science Advances.


Over the past several decades, China has become a major manufacturing powerhouse, but in doing so has put the health of its urban citizens at risk through severe air pollution, most of it from factory smokestacks. The problem was highlighted in 2008, when viewers of the Beijing Olympics saw dense clouds of pollution blanketing major parts of the city. Since then, the Chinese government has instituted policies and rules governing the amount of pollutants a company can emit, and the results have been promising: pollution levels have diminished. But the researchers on this new effort report that the problem has been shifted rather than solved. They collected and tested air samples from a large number of sites just outside the big metropolitan areas and found large increases in air pollution levels.

The researchers note that many of the rules surrounding pollution limits in China are localized. This means that companies that find themselves emitting over the limit can simply move to a nearby area that falls under a different, less strict jurisdiction. They note also that the officials who set air pollution rules outside the metropolitan areas are often much laxer about pollutants, because they hope to attract companies that will employ the people who live there.

In testing the air in areas some distance from cities such as Beijing, the researchers found that, on average, the increases in particulate matter were 1.6 times larger than the reductions seen in the cities, indicating that the country is actually producing more of it than ever. They also found that the lax rules outside of metropolitan areas led to overall emission levels 3.6 times higher than before the new urban rules were put in place, and that overall water consumption was 2.9 times higher as well. They also discovered that occasionally the winds shifted, pushing the pollution from the new manufacturing areas back over the cities, covering them once again with dense clouds of pollutants.
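
The 1.6 figure implies a net national increase, which can be seen with a back-of-the-envelope calculation (the baseline below is invented; only the 1.6 ratio comes from the article):

```python
# If cities cut particulate pollution by some amount R while nearby
# areas gained 1.6 * R, the national total went up, not down.

city_reduction = 100.0                  # arbitrary units removed in cities
nearby_increase = 1.6 * city_reduction  # gain just outside the cities
net_change = nearby_increase - city_reduction
print(net_change)                       # prints 60.0: a net increase
```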

Japan creates first artificial crater on asteroid

Japan's Hayabusa2 mission aims to shed light on how the solar system evolved
Japanese scientists have succeeded in creating what they called the first-ever artificial crater on an asteroid, a step towards shedding light on how the solar system evolved, the country's space agency said Thursday. The announcement comes after the Hayabusa2 probe fired an explosive device at the Ryugu asteroid early this month to blast a crater in the surface and scoop up material, aiming to reveal more about the origins of life on Earth.
Yuichi Tsuda, Hayabusa2 project manager at the Japanese space agency (JAXA), told reporters they confirmed the crater from images captured by the probe, located 1,700 metres (5,500 feet) from the asteroid's surface.
"Creating an artificial crater with an impactor and observing it in detail afterwards is a world-first attempt," Tsuda said.
"This is a big success."
NASA's Deep Impact probe succeeded in creating an artificial crater on a comet in 2005, but only for observation purposes.
Masahiko Arakawa, a Kobe University professor involved in the project, said it was "the best day of his life".
"We can see such a big hole a lot more clearly than expected," he said, adding the images showed a crater 10 metres in diameter.
JAXA scientists had previously predicted that the crater could be as large as 10 metres in diameter if the surface was sandy, or three metres if rocky.
"The surface is filled with boulders but yet we created a crater this big. This could mean there's a scientific mechanism we don't know or something special about Ryugu's materials," the professor said.
The aim of blasting the crater on Ryugu is to throw up "fresh" material from under the asteroid's surface that could shed light on the early stages of the solar system.
The asteroid is thought to contain relatively large amounts of organic matter and water from some 4.6 billion years ago when the solar system was born.
In February, Hayabusa2 touched down briefly on Ryugu and fired a bullet into the surface to puff up dust for collection, before blasting back to its holding position.
The mission, with a price tag of around 30 billion yen ($270 million), was launched in December 2014 and is scheduled to return to Earth with its samples in 2020.
Photos of Ryugu—which means "Dragon Palace" in Japanese and refers to a castle at the bottom of the ocean in an ancient Japanese tale—show the asteroid has a rough surface full of boulders.

© 2019 AFP


APRIL 25, 2019
Early melting of winter snowfall advances the Arctic springtime

by University of Edinburgh
Spring plants in parts of the Arctic tundra are arriving earlier than in previous decades, owing to early melt of winter snows and rising temperatures, according to a study led by University of Edinburgh scientists. Credit: Sandra Angers-Blondin

The early arrival of spring in parts of the Arctic is driven by winter snow melting sooner than in previous decades and by rising temperatures, research suggests.


The findings, from a study of plants at coastal sites around the Arctic tundra, help scientists understand how the region is responding to a changing climate and how it may continue to adapt.

Researchers studied the timing of activity in seasonal vegetation, which acts as a barometer for the environment. Changes in the arrival of leaves and flowers—which cover much of the region—can reflect or influence shifts in the climate.

A team from the University of Edinburgh, and universities in Canada, the US, Denmark and Germany, gathered data on the greening and flowering of 14 plant species at four sites in Alaska, Canada and Greenland.

They sought to better understand which factors have the greatest influence on the timing of spring plants in the tundra—temperatures, snow melt or sea ice melt.

Variation between the sites in when leaves and flowers appeared was found to be linked to the timing of local snow melt and, to a lesser extent, to temperatures.

Across the tundra, leaves and flowers were found to emerge as much as 20 days sooner compared with two decades ago. Within the same timeframe, spring temperatures warmed by 1 degree Celsius each decade on average, while loss of sea ice occurred around 20 days sooner across the different regions.

Spring plants in parts of the Arctic tundra are arriving earlier than in previous decades, owing to early melt of winter snows and rising temperatures, according to a study led by University of Edinburgh scientists. Credit: Anne D. Bjorkman

Overall snow melt, which advanced by about 10 days over two decades, had the greatest influence on the timing of spring.
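
A trend such as "20 days sooner over two decades" is typically estimated by fitting a line to day-of-year observations against year. A sketch with synthetic data (exactly one day of advance per year, i.e. ten days per decade):

```python
import numpy as np

# Hypothetical illustration: fit a line to the day-of-year of leaf-out
# against year, then read the slope as a rate of advance. The
# observations below are synthetic, not the study's data.

years = np.arange(1999, 2019)                # a two-decade window
leaf_out_doy = 170 - 1.0 * (years - 1999)    # day of year, advancing

slope_per_year = np.polyfit(years, leaf_out_doy, 1)[0]
print(round(slope_per_year * 10, 1))         # days per decade; prints -10.0
```

A negative slope means the event is moving earlier in the year, which is the sense in which the study's 10- and 20-day figures are reported.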

The study, published in Global Change Biology, was funded by the UK Natural Environment Research Council.

Dr. Isla Myers-Smith, of the University of Edinburgh's School of GeoSciences, who took part in the study, said: "In the extreme climate of the Arctic tundra, where summers are short, the melting of winter snows as well as warming temperatures are key drivers of the timing of spring. This will help us to understand how Arctic ecosystems are responding as the climate warms."


More information: Jakob J. Assmann et al, Local snow melt and temperature—but not regional sea ice—explain variation in spring phenology in coastal Arctic tundra, Global Change Biology (2019). DOI: 10.1111/gcb.14639


Provided by University of Edinburgh