Monday, December 20, 2021

Toward fusion energy, team models plasma turbulence on the nation's fastest supercomputer
A visualization of deuterium-tritium density fluctuations in a tokamak driven by turbulence.
 Areas of red are representative of high density and areas of blue are representative of
 low density. Credit: Emily Belli, General Atomics

A team modeled plasma turbulence on the nation's fastest supercomputer to better understand plasma behavior

The same process that fuels stars could one day be used to generate massive amounts of power here on Earth. Nuclear fusion—in which light atomic nuclei fuse to form heavier nuclei and release energy in the process—promises to be a long-term, sustainable, and safe form of energy. But scientists are still trying to fine-tune the process of creating net fusion power.

A team led by computational physicist Emily Belli of General Atomics has used the 200-petaflop Summit supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), a US Department of Energy (DOE) Office of Science user facility at Oak Ridge National Laboratory (ORNL), to simulate energy loss in fusion plasmas. The team used Summit to model plasma turbulence, the unsteady movement of plasma, in a fusion device called a tokamak. The team's simulations will help inform the design of next-generation tokamaks like ITER with optimum confinement properties. ITER is the world's largest tokamak, which is being built in the south of France.

"Turbulence is the main mechanism by which particle losses happen in the plasma," Belli said. "If you want to generate a plasma with really good confinement properties and with good fusion power, you have to minimize the turbulence. Turbulence is what moves the particles and energy out of the hot core where the fusion happens."

The results, which were published in Physics of Plasmas earlier this year, provided estimates for the particle and heat losses to be expected in future tokamaks and reactors. These findings will help scientists and engineers understand how to achieve the best operating scenarios in real-life tokamaks.

A balancing act

In the fusion that occurs in stars like our sun, two hydrogen nuclei (i.e., positively charged protons) fuse to form helium ions. However, in experiments on Earth, scientists must use hydrogen isotopes to create fusion. Each hydrogen isotope has one positively charged proton, but different isotopes carry different numbers of neutrons. Neutrons don't have a charge, but they do add mass to the atom.

Traditionally, physicists have used pure deuterium—a hydrogen isotope with one neutron—to generate fusion. Deuterium is readily available and easier to handle than tritium, a hydrogen isotope with two neutrons. However, physicists have known for decades that using a mixture of 50 percent deuterium and 50 percent tritium yields the highest fusion output at the lowest temperature.

"Even though they've known this mixture gives the greatest amount of fusion output, almost all experiments for the last few decades have only used pure deuterium," Belli said. "Experiments using this mixture have only been done a few times over the past few decades. The last time it was done was more than 20 years ago."

To ensure the plasma is confined in a reactor and that energy is not lost, both the deuterium and tritium in the mixture must have equal particle fluxes, an indicator of density. Scientists aim to maintain a 50-50 density throughout the tokamak core.

"You want the deuterium and the tritium to stay in the hot core to maximize the fusion power," Belli said.

Supercomputing powers fusion simulations

To study the phenomenon, the team competed for and won computing allocations on Summit through two allocation programs at the OLCF. These were the Advanced Scientific Computing Research Leadership Computing Challenge, or ALCC, and the Innovative and Novel Computational Impact on Theory and Experiment, or INCITE, programs.

The researchers modeled plasma turbulence on Summit using the CGYRO code codeveloped by Jeff Candy, director of theory and computational sciences at General Atomics and co-principal investigator on the project. CGYRO was developed in 2015 from the GYRO legacy computational plasma physics code. The developers designed CGYRO to be compatible with the OLCF's Summit system, which debuted in 2018.

"We realized in 2015 that we wanted to upgrade our models to handle these self-sustaining plasma regimes better and to handle the multiple scales that arise when you have different types of ions and electrons, like in these deuterium-tritium plasmas," Belli said. "It became clear that if we wanted to update our models and have them be highly optimized for next-generation architectures, then we should start from the ground up and completely rewrite the code. So that's what we did."

With Summit, the team could include both isotopes—deuterium and tritium—in their simulations.

"Up until now, almost all simulations have only included one of these isotopes—either deuterium or tritium," Belli said. "The power of Summit enabled us to include both as two separate species, model the full dimensions of the problem, and resolve it at different time and spatial scales."

Results for the real world

Experiments using deuterium-tritium fuel mixtures are now being carried out for the first time since 1997 at the Joint European Torus (JET), a fusion research facility at the Culham Centre for Fusion Energy in Oxfordshire, UK. The experiments at the JET facility will help scientists and engineers develop fuel control practices for maintaining a 50-50 ratio of deuterium to tritium. Belli said it will likely be the last time deuterium-tritium experiments are run until ITER, the world's largest tokamak, is built.

"The experimental team is getting results as we speak, and in the next few months, the data will be analyzed," Belli said.

The results will give scientists a better idea of the behavior of deuterium-tritium fuel for a practical fusion reactor.

"This fuel gives you the highest fusion output at the lowest temperature, so you don't have to heat it quite as hot to get an enormous amount of  power out of it," Belli said.

"Because it's been so long since these kinds of experiments have been done, our simulations are important to predict the behavior of this fuel mixture to plan for ITER. Summit is giving us the power to do just that."Isotope movement holds key to the power of fusion reactions

More information: E. A. Belli et al, Asymmetry between deuterium and tritium turbulent particle flows, Physics of Plasmas (2021). DOI: 10.1063/5.0048620

Journal information: Physics of Plasmas 

Provided by Oak Ridge National Laboratory 

Unraveling a puzzle to speed the development of fusion energy

Yichen Fu, center, lead author of the path-setting paper with co-authors Laura Xing Zhang 
and Hong Qin. Credit: Photos of Fu and Qin by Elle Starkman/Office of Communications; 
collage by Kiran Sudarsanan.

Researchers at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory have developed an effective computational method to simulate the crazy-quilt movement of free electrons during experimental efforts to harness on Earth the fusion power that drives the sun and stars. The method cracks a complex equation that can enable improved control of the random, fast-moving electrons in the fuel for fusion energy.

Fusion produces enormous energy by combining light elements in the form of plasma—the hot, charged gas composed of free electrons and atomic nuclei, or ions, that makes up 99 percent of the visible universe. Scientists around the world are seeking to reproduce the fusion process to provide a safe, clean and abundant source of power to generate electricity.

Solving the equation

A key hurdle for researchers developing fusion on doughnut-shaped devices called tokamaks, which confine the plasma in magnetic fields, has been solving the equation that describes the motion of free-wheeling electrons as they collide and bounce around. Standard methods for simulating this motion, technically called pitch-angle scattering, have proven unsuccessful due to the complexity of the equation.

A successful set of computational rules, or algorithm, would solve the equation while conserving the energy of the speeding particles. "Solving the stochastic differential equation gives the probability of every path the scattered electrons can take," said Yichen Fu, a graduate student in the Princeton Program in Plasma Physics at PPPL and lead author of a paper in the Journal of Computational Physics that proposes a solution. Such equations yield a pattern that can be analyzed statistically but not determined precisely.

The accurate solution describes the trajectories of the electrons being scattered. "However, the trajectories are probabilistic and we don't know exactly where the electrons would go because there are many possible paths," Fu said. "But by solving the trajectories we can know the probability of electrons choosing every path, and knowing that enables more accurate simulations that can lead to better control of the plasma."
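
To give a flavor of what solving such a stochastic differential equation involves, here is a minimal sketch, not PPPL's algorithm, that applies the textbook Euler-Maruyama method to a standard pitch-angle scattering equation. It also shows the pitfall that careful schemes must avoid: the naive update can push the pitch variable outside its physical range and has to be crudely clipped.

```python
# Minimal sketch (not PPPL's algorithm) of pitch-angle scattering as
# a stochastic differential equation, advanced with the textbook
# Euler-Maruyama method. xi = cos(pitch angle) must stay in [-1, 1];
# the naive update below can step outside that bound, one reason
# this equation is hard to solve while conserving energy.

import numpy as np

rng = np.random.default_rng(0)
nu = 1.0            # collision frequency (arbitrary units)
dt = 1e-3           # time step
n_steps, n_particles = 5_000, 10_000

xi = np.full(n_particles, 0.9)  # start as a beam-like population

for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_particles)
    # Lorentz pitch-angle scattering:
    #   d(xi) = -nu*xi*dt + sqrt(nu*(1 - xi^2))*dW
    xi += -nu * xi * dt + np.sqrt(np.maximum(nu * (1 - xi**2), 0)) * dW
    xi = np.clip(xi, -1.0, 1.0)  # crude fix that a good scheme avoids

# Scattering relaxes the beam toward isotropy (<xi> -> 0).
print(f"mean pitch after nu*t = {nu * dt * n_steps:.0f}: {xi.mean():+.3f}")
```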

A major benefit of this knowledge is improved guidance for fusion researchers who pump electric current into tokamak plasmas to create the magnetic field that confines the superhot gas. Another benefit is a better understanding of the effect of pitch-angle scattering on energetic runaway electrons that pose a danger to fusion devices.

Rigorous proof

The finding provides a rigorous mathematical proof of the first working algorithm for solving the complex equation. "This gives experimentalists a better theoretical description of what's going on to help them design their experiments," said Hong Qin, a principal research physicist, advisor to Fu and a coauthor of the paper. "Previously, there was no working algorithm for this equation, and physicists got around this difficulty by changing the equation."

The reported study represents the research activity in algorithms and applied math of the recently launched Computational Sciences Department (CSD) at PPPL and expands an earlier paper coauthored by Fu, Qin and graduate student Laura Xin Zhang, a coauthor of this paper. While that work created a novel energy-conserving algorithm for tracking fast particles, the method did not incorporate magnetic fields and the mathematical accuracy was not rigorously proven.

The CSD, founded this year as part of the Lab's expansion into a multi-purpose research center, supports the critical fusion energy sciences mission of PPPL and serves as the home for computationally intensive discoveries. "This technical advance displays the role of the CSD," Qin said. "One of its goals is to develop algorithms that lead to improved fusion simulations."

Nuclear, pumped storage, and coal power plants are more likely to have multiple owners

U.S. electric generating capacity and ownership type
Source: U.S. Energy Information Administration, Annual Electric Generator Inventory

The U.S. Energy Information Administration (EIA) collects data on whether an electric generator is owned by one company or jointly owned by several companies, and for those jointly owned, each owner’s share of ownership. In 2019, about 14% of the 1,099 gigawatts (GW) of total operational U.S. electricity generating capacity was jointly owned. Nuclear capacity had the highest percentage of joint ownership at 37%, followed by pumped-storage hydropower at 34% and coal at 29%. These types of power plants tend to be large-scale facilities that are expensive to build, and the technologies come with higher regulatory risks, making joint ownership more attractive by reducing plant ownership risks for each owner.

Joint ownership reduces risk by sharing construction and operation costs across multiple entities. A joint venture spreads the cost and risk across entities while allowing them to benefit from capacity portfolio diversity and the economies of scale that large generation assets provide. Complementary expertise from different entities can also potentially increase development and operational efficiency.

In 2019, 58 nuclear power plants with a total of 96 nuclear reactors were operating in the United States. The largest U.S. nuclear power plant, Palo Verde in Arizona, has seven joint owners. These owners include utilities from Arizona, California, Texas, and New Mexico. The ownership percentages in Palo Verde range from 29% to 6%.

The only nuclear reactors under construction in the United States are Units 3 and 4 at Vogtle nuclear power plant in Georgia. Once completed, Vogtle will be the largest nuclear power plant in the country. Vogtle is jointly owned by four entities: Georgia Power (46% ownership), Oglethorpe Power (30%), the Municipal Electric Authority of Georgia (23%), and Dalton Utilities (2%).

Pumped-storage hydropower and conventional hydropower are technologies that are similar to one another, but they have very different ownership profiles. Only 2% of conventional hydropower capacity was jointly owned in 2019, compared with 34% for pumped storage. Almost 64% of U.S. conventional hydroelectric capacity is owned by a single federal, state, or municipal government. Another 19% is owned by a single electric utility.

Most conventional hydroelectric plants were built between 1950 and 1980, when hydropower project funding mainly came from the federal government. Conversely, most pumped storage became operational in the 1970s and 1980s when the industry and markets supported investors constructing large capital projects. The two largest pumped-storage plants, Bath County (3.0 GW) and Ludington (2.3 GW), which together represent 22% of total U.S. pumped-storage capacity, are jointly owned.

Principal contributor: Ray Chen

Nuclear energy to still be main source of electricity in Bulgaria in 2030 – GlobalData

Photo: Kozloduy NPP

Published
May 13, 2021
Country
Bulgaria
Author
Vladimir Spasić

Nuclear power will remain the dominant source of power generation in Bulgaria by 2030, despite the government’s plans to shift toward renewable power.

The Bulgarian government is collaborating with the United States and Russia in the development of new nuclear power plants. It is preparing the construction of a seventh unit at the Kozloduy nuclear power plant and the deployment of NuScale’s small modular reactor (SMR) technology.


Nuclear power generation share in total power generation was 44% in 2020, and it is expected to remain above 40% until 2030

“Nuclear power generation was 15.9 TWh in 2020, making its share 44% in total power generation in the country and this is expected to remain above 40% until 2030,” said Pavan Vyakaranam, Practice Head at GlobalData.

Electricity demand in Bulgaria stood at 30.9 TWh in 2020.

The share of renewables was 22.1% in 2018

According to the draft Sustainable Energy Development Strategy of Bulgaria until 2030 with a projection until 2050, electricity generation from renewable sources is seen growing to 30.33% from 22.1%, registered in 2018.



Nuclear power will remain the dominant source for power generation in the country at least until 2030, estimated at 14.1 TWh per year, despite the government’s plans to replace it with renewable power capacity, according to analytics company GlobalData.

Vyakaranam said Bulgaria’s electricity market is currently in transition, with the government slowly decreasing its coal power capacity in order to replace it with renewable power.
Country plans investments with Russia and US

Bulgaria has only one nuclear power station, the Kozloduy nuclear power plant (NPP), with two units in operation after the decommissioning of units 1 and 2 in 2002 and units 3 and 4 in 2006.

In January 2021, the Bulgarian government approved plans for the construction of a seventh unit, using Russian-supplied equipment purchased for the Belene project. However, according to GlobalData, the schedule is still uncertain due to financial issues.

Bulgaria has also taken multiple steps toward the development of nuclear power in recent times, including joining the Nuclear Energy Agency (NEA) in January 2021, while Kozloduy NPP signed a memorandum of understanding (MoU) with US-based NuScale Power for the deployment of its small modular reactor (SMR) technology.

GEORGIA, USA

Nuclear plant price doubles to $28.5B as other owners balk

Rendering Vogtle Units 3 and 4, courtesy of Georgia Power

By JEFF AMY Associated Press

ATLANTA (AP) — The cost of two nuclear reactors being built in Georgia is now $28.5 billion, more than twice the original price tag, and the other owners of Plant Vogtle argue Georgia Power Co. has triggered an agreement requiring Georgia Power to shoulder a larger share of the financial burden.

Atlanta-based Southern Co. announced in its quarterly earnings statement Thursday that Georgia Power’s share of the third and fourth nuclear reactors at Plant Vogtle has risen to a total of $12.7 billion, an increase of $264 million. Along with what cooperatives and municipal utilities project, the total cost of Vogtle has now more than doubled the original projection of $14 billion.

Opponents have long warned that overruns would be sky-high. Liz Coyle, executive director of consumer advocacy group Georgia Watch, said the price tag is “outrageous” but predictable.

“We said you can’t build it for what you’re saying you can,” she said of Georgia Watch’s opposition to the project when the Georgia Public Service Commission originally authorized the new reactors.

Total costs are actually higher than $28.5 billion, because that figure doesn't count the $3.68 billion that contractor Westinghouse paid back to the owners after going bankrupt. When the project was approved in 2012, the first electricity was supposed to be generated in 2016.

The company and regulators insist the first new U.S. reactors in decades are the best source of clean and reliable energy for Georgia. Opponents say other options would be cheaper and better, including natural gas or solar generation.

Southern Co. also disclosed Thursday that the other owners of Vogtle are saying Georgia Power has tripped an agreement to pay a larger share of the ongoing overruns, a cost the company estimates at up to $350 million. Southern Co. said it disagrees that Georgia Power has crossed the cost threshold but has signed an agreement to extend talks with the other owners on the issue.

Georgia Power owns 45.7% of the new reactors, while cooperative-owned Oglethorpe Power Corp. owns 30%. The Municipal Electric Authority of Georgia owns 22.7% and the city of Dalton’s municipal utility owns 1.6%. Florida’s Jacksonville Electric Authority is obligated to cover some of MEAG’s costs. Some cooperatives and municipal utilities in Alabama and northwest Florida have agreed to buy power as well.

The higher costs stem from more construction delays. Georgia Power announced last month that it doesn’t expect Unit 3 to start generating electricity until the third quarter of 2022. It was the third delay announced since May. Unit 4 is now projected to enter service sometime between April and June of 2023.

The company says it is redoing substandard construction work and contractors aren’t meeting deadlines. Experts hired by the Georgia Public Service Commission to monitor construction have long said Southern Co. has set an unrealistic schedule. In August, the U.S. Nuclear Regulatory Commission found two sets of electrical cables meant to provide redundancy in Unit 3 weren’t properly separated. Earlier, Georgia Power had to repair a leak in Unit 3′s spent fuel pool.

Georgia Power shareholders have been paying the cost of recent overruns, but the company could ask regulators to require customers to pay some or all of those bills.

Other owners almost walked away in 2018, only agreeing to keep going after Georgia Power agreed to protect them from some additional overruns.

The Thursday stock filing indicates the other owners now believe the cost threshold has been reached requiring Georgia Power to pay a larger share of the project.

Southern Co. told investors that Georgia Power disagrees that it has reached the threshold, but said the owners had signed an agreement Oct. 29 to “clarify” how the 2018 deal will work.

Beyond a certain point, Georgia Power is required to pay an additional $80 million of the next $800 million. After that tier, Georgia Power is required to pay an additional $100 million of the next $500 million. Above that price, other owners can sell parts of their ownership shares back to Georgia Power.

The $350 million price tag suggests the other owners argue overruns have reached the third tier.
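
A rough, pro-rata reading of those tiers (an assumption; the agreement's exact mechanics are more complex than this public summary) makes the arithmetic behind that inference easy to check:

```python
# Rough sketch of the cost-sharing tiers as described above, assuming
# pro-rata application within each tier. Not the actual contract math.
#   tier 1: Georgia Power pays an extra $80M of the next $800M
#   tier 2: an extra $100M of the next $500M
#   tier 3: beyond that, other owners can sell shares back

def extra_share_millions(overrun_millions: float) -> float:
    """Georgia Power's extra obligation (in $M) on overruns above
    the trigger threshold, under the two defined tiers."""
    tier1 = min(overrun_millions, 800.0)
    tier2 = min(max(overrun_millions - 800.0, 0.0), 500.0)
    return 80.0 * (tier1 / 800.0) + 100.0 * (tier2 / 500.0)

# The two defined tiers together cap the extra obligation at $180M:
print(extra_share_millions(1300.0))  # -> 180.0
# So a cost estimated at "up to $350 million" implies the dispute
# extends into the third tier, where share buybacks become possible.
```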

It wasn’t immediately clear Thursday exactly what is being negotiated or when an agreement might be reached. Georgia Power spokesperson Jeffrey Wilson acknowledged the disagreement, saying “all parties are working constructively” to resolve differences. He answered no questions. Oglethorpe spokesperson Terri Statham declined comment, saying Oglethorpe would provide an update in a Nov. 12 investor filing. MEAG didn’t respond to emails and calls seeking comment.

The Georgia Public Service Commission on Tuesday approved a $224 million rate increase to pay for $2.1 billion in construction costs on Unit 3. That’s a 3% rate increase for residential customers, or $3.78 a month on a bill of $122.73. It will take effect after Unit 3 enters commercial operation.

Georgia Power’s 2.6 million customers have already paid more than $3.5 billion in Vogtle borrowing costs. Customers of Oglethorpe-served cooperatives have already paid almost $400 million, according to financial statements.

Vogtle nuclear expansion facing yet another delay, Georgia Power reports

Georgia Power has pushed back the in-service dates for its troubled Vogtle Units 3 and 4 expansion project—the first new nuclear power plant construction to be completed in years—by maybe another quarter or more.

The utility originally had hoped to make Unit 3 commercially operational by this year, but numerous cost overruns and construction problems have caused more delays. Now, Southern Co.-owned Georgia Power says Unit 3 may not start operations until July and maybe as late as September 2022, while Unit 4 will not come online until the second quarter of 2023.

“The primary drivers of the change in schedule for Unit 3 include continued identification of additional remediation work, construction productivity related to completion of remaining electrical installations and remediation work, and the subsequent resulting pace of system turnovers,” reads the company press release. “The primary drivers of the change in schedule for Unit 4 include productivity challenges and some craft and support resources being diverted temporarily to support construction efforts on Unit 3.

“The achievability of these projected in-service dates is subject to current and future challenges, including construction productivity, the volume of construction remediation work, the pace of system and area turnovers, and the progression of startup and other testing. Any further delays could result in later in-service dates.”

This summer, Georgia Power reported that both the projected starting dates would be pushed back and costs would rise by billions. Recent estimates put the overall tab of the project at close to $28 billion.

Work on Vogtle units 3 and 4 began in 2015. Two years later, the original contractor Westinghouse filed for Chapter 11 bankruptcy reorganization, so Bechtel was brought in to lead construction efforts to the finish line. Southern Nuclear took oversight duties from Westinghouse.

Both Vogtle units will have Westinghouse AP1000 reactors at their center. Each unit is designed to generate about 1,000 MW at capacity, and together they will power close to 500,000 customers.

Unit 3 direct construction is now 99 percent complete, with the entire project at about 93 percent complete. The construction site now has close to 7,000 workers, with about 800 permanent jobs planned once the units begin operating.

Earlier this year, the U.S. Nuclear Regulatory Commission revealed it was launching a special inspection into remediation work on Unit 3 related to the electrical cable raceway system.

If and when they go into service, Vogtle units 3 and 4 would be the first new U.S. nuclear generation reactors since Watts Bar 2 entered operation six years ago.

Georgia Power’s lead partners on the project include the Municipal Electric Authority of Georgia (MEAG), Dalton Utilities and Oglethorpe Power. Southern Co. is the parent of Georgia Power.

Keeping Diablo Canyon open can help California achieve its climate goals, researchers say
By John Engel - 12.2.2021


Extending the life of Diablo Canyon Nuclear Plant could help California achieve its ambitious climate goals, while saving customers billions of dollars and reducing reliance on natural gas, according to a new study.

Researchers at Stanford University's Precourt Institute for Energy and the Massachusetts Institute of Technology Center for Advanced Nuclear Energy Systems found that extending Diablo Canyon's life by 10 years would reduce carbon emissions from California's power sector by more than 10 percent annually from 2017 levels and save ratepayers a total of $2.6 billion. By keeping the plant open until 2045, ratepayers would save $21 billion, they determined.

“The worsening climate crisis requires urgent action to accelerate emission reductions,” said lead author Jacopo Buongiorno, director of the MIT Center for Advanced Nuclear Energy Systems. “An inclusive strategy that utilizes Diablo Canyon, in addition to an aggressive build-out of renewables and other sources of clean generation, would significantly reduce California’s power sector emissions over the course of the next two decades.”

In 2018, the California Public Utilities Commission approved a settlement to shut down Diablo Canyon, which currently provides 8 percent of California's electricity production and 15 percent of its carbon-free electricity.

U.S. Energy Secretary Jennifer Granholm, a proponent of nuclear energy, told Reuters that she would be open to talking with state officials about extending the life of Diablo Canyon because of "… a change underfoot about the opinion that people may have about nuclear."

After 60 Years, Nuclear Power for Spaceflight is Still Tried and True

Six decades after the launch of the first nuclear-powered space mission, Transit IV-A, NASA is embarking on a bold future of human exploration and scientific discovery. This future builds on a proud history of safely launching and operating nuclear-powered missions in space.

“Nuclear power has opened the solar system to exploration, allowing us to observe and understand dark, distant planetary bodies that would otherwise be unreachable. And we’re just getting started,” said Dr. Thomas Zurbuchen, associate administrator for NASA's Science Mission Directorate. “Future nuclear power and propulsion systems will help revolutionize our understanding of the solar system and beyond and play a crucial role in enabling long-term human missions to the Moon and Mars.”

June 29 marks the 60th anniversary of Transit IV-A, the first nuclear-powered space mission.
Credits: NASA/Gayle Dibiasio

From Humble Beginnings: Nuclear Power Spawns an Age of Scientific Discovery

On June 29, 1961, the Johns Hopkins University Applied Physics Laboratory launched the Transit IV-A spacecraft. It was a U.S. Navy navigational satellite with a SNAP-3B radioisotope-powered generator producing 2.7 watts of electrical power -- about enough to light an LED bulb. Transit IV-A broke an APL mission-duration record and confirmed that the Earth's equator is elliptical. It also set the stage for groundbreaking missions that have extended humanity's reach across the solar system.

Since 1961, NASA has flown more than 25 missions carrying a nuclear power system through a successful partnership with the Department of Energy (DOE), which provides the power systems and plutonium-238 fuel.

“The department and our national laboratory partners are honored to play a role in powering NASA’s space exploration activities,” said Tracey Bishop, deputy assistant secretary in DOE’s Office of Nuclear Energy. “Radioisotope Power Systems are a natural extension of our core mission to create technological solutions that meet the complex energy needs of space research, exploration, and innovation.”

There are only two practical ways to provide long-term electrical power in space: the light of the Sun or heat from a nuclear source.

“As missions move farther away from the Sun to dark, dusty, and harsh environments, like Jupiter, Pluto, and Titan, they become impossible or extremely limited without nuclear power,” said Leonard Dudzinski, chief technologist for NASA’s Planetary Science Division and program executive for Radioisotope Power. 

That’s where Radioisotope Power Systems, or RPS, come in. They are a category of power systems that convert heat generated by the decay of plutonium-238 fuel into electricity.
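
The reason plutonium-238 suits decades-long missions is its 87.7-year half-life: the fuel's heat output falls off only slowly. The sketch below works through the exponential decay; the 2,000-watt starting value is an assumption for illustration (roughly the thermal class of an MMRTG), not a figure from any specific mission.

```python
# Decay-heat sketch for a plutonium-238 heat source. The 87.7-year
# half-life is standard; the 2,000 W starting point is an assumed,
# illustrative beginning-of-life thermal output.

PU238_HALF_LIFE_YEARS = 87.7

def thermal_power(p0_watts: float, years: float) -> float:
    """Heat output remaining after `years` of simple exponential
    decay (ignoring secondary effects like helium buildup)."""
    return p0_watts * 0.5 ** (years / PU238_HALF_LIFE_YEARS)

p0 = 2000.0  # W thermal, assumed
for t in (0, 15, 45, 87.7):
    print(f"after {t:5.1f} years: {thermal_power(p0, t):6.0f} W thermal")
# Even after 45 years (Voyager territory), ~70% of the heat remains;
# thermoelectric converters then turn a few percent into electricity.
```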

“These systems are reliable and efficient,” said June Zakrajsek, manager for NASA’s Radioisotope Power Systems Program office at Glenn Research Center in Cleveland. “They operate continuously over long-duration space missions regardless of sunlight, temperature, charged particle radiation, or surface conditions like thick clouds or dust. They’ve allowed us to explore from the Sun to Pluto and beyond.”

RPS powered the Apollo Lunar Surface Experiment Package. They’ve sustained Voyager 1 and 2 since 1977, and they kept Cassini-Huygens’ instruments warm as it explored frigid Saturn and its moon Titan.

Today, a Multi-Mission Radioisotope Thermoelectric Generator (MMRTG) powers the Perseverance rover, which is captivating the nation as it searches for signs of ancient life on Mars, and a single RTG is sustaining New Horizons as it ventures on its way out of the solar system 15 years after its launch.

“The RTG was and still is crucial to New Horizons,” said Alan Stern, New Horizons principal investigator from the Southwest Research Institute. “We couldn’t do the mission without it. No other technology exists to power a mission this far away from the Sun, even today.”

Great Things to Come: Science and Human Exploration

Dragonfly, which is set to launch in 2027, is the next mission with plans to use an MMRTG. Part of NASA’s New Frontiers program, Dragonfly is an octocopter designed to explore and collect samples on Saturn’s largest moon, Titan, an ocean world with a dense, hazy atmosphere.

“RPS is really an enabling technology,” said APL’s Zibi Turtle, principal investigator for the upcoming Dragonfly mission. “Early missions like Voyager, Galileo, and Cassini that relied on RPS have completely changed our understanding and given us a geography of the distant solar system…Cassini gave us our first close-up look at the surface of Titan.”

According to Turtle, the MMRTG serves two purposes on Dragonfly: power output to charge the lander’s battery and waste heat to keep its instruments and electronics warm.

“Flight is a very high-power activity. We’ll use a battery for flight and science activities and recharge the battery using the MMRTG,” said Turtle. “The waste heat from the power system is a key aspect of our thermal design. The surface of Titan is very cold, but we can keep the interior of the lander warm and cozy using the heat from the MMRTG.”
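
A back-of-the-envelope energy budget shows why that pairing works. Apart from the roughly 110 watts of electricity commonly cited for an MMRTG at beginning of life, every value below is an assumption chosen only to illustrate the charge-then-fly pattern, not Dragonfly's actual design numbers.

```python
# Illustrative charge-then-fly budget. Only the ~110 W MMRTG output
# is a commonly cited figure; the rest are made-up assumptions.

mmrtg_electric_w = 110.0       # W, approximate MMRTG electrical output
housekeeping_w = 40.0          # W, assumed lander overhead
battery_capacity_wh = 10_000.0 # Wh, assumed flight battery
flight_power_w = 5_000.0       # W, assumed draw during powered flight

charge_rate_w = mmrtg_electric_w - housekeeping_w
hours_to_recharge = battery_capacity_wh / charge_rate_w
flight_minutes = battery_capacity_wh / flight_power_w * 60

print(f"recharge: ~{hours_to_recharge:.0f} h of trickle charging")
print(f"buys: ~{flight_minutes:.0f} min of powered flight")
# Long, low-power recharge periods punctuated by short, high-power
# hops: the battery handles the peaks, the RTG supplies the average.
```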

As the scientific community continues to benefit from RPS, NASA’s Space Technology Mission Directorate is investing in new technology using reactors and low-enriched uranium fuel to enable a robust human presence on the Moon and eventually human missions to Mars.

Astronauts will need plentiful and continuous power to survive the long lunar nights and explore the dark craters on the Moon’s South Pole. A fission surface power system could provide enough juice to power robust operations. NASA is leading an effort, working with the DOE and industry, to design a fission power system for a future lunar demonstration that will pave the way for base camps on the Moon and Mars.

NASA has also thought about viable ways to reduce the time it takes to travel to Mars, including nuclear propulsion systems.

As NASA advances its bold vision of exploration and scientific discovery in space, it benefits from 60 years of the safe use of nuclear power during spaceflight. Sixty years of enlightenment that all started with a little satellite called Transit IV-A.

-end-

Jan Wittry
NASA's Glenn Research Center

Last Updated: Jun 29, 2021
Editor: Bill Keeter

New Research Could Help Boost the Efficiency of Nuclear Power Plants in the Near Future


New research from Texas A&M University scientists could help in boosting the efficiency of nuclear power plants in the near future. By using a combination of physics-based modeling and advanced simulations, they found the key underlying factors that cause radiation damage to nuclear reactors, which could then provide insight into designing more radiation-tolerant, high-performance materials.

“Reactors need to run at either higher power or use fuels longer to increase their performance. But then, at these settings, the risk of wear and tear also increases,” said Dr. Karim Ahmed, assistant professor in the Department of Nuclear Engineering. “So, there is a pressing need to come up with better reactor designs, and a way to achieve this goal is by optimizing the materials used to build the nuclear reactors.”

The results of the study are published in the journal Frontiers in Materials.


A study by Dr. Karim Ahmed and his team could help optimize materials for modern nuclear reactors so that they are safer, more efficient and economical.

According to the Department of Energy, nuclear energy surpasses all other natural resources in power output and accounts for 20% of the United States’ electricity generation. The source of nuclear energy is fission reactions, wherein an isotope of uranium splits into daughter elements after being struck by fast-moving neutrons. These reactions generate enormous heat, so nuclear reactor parts, particularly the pumps and pipes, are made with materials possessing exceptional strength and resistance to corrosion.

However, fission reactions also produce intense radiation that causes a deterioration in the nuclear reactor’s structural materials. At the atomic level, when energetic radiation infiltrates these materials, it can either knock off atoms from their locations, causing point defects, or force atoms to take vacant spots, forming interstitial defects. Both these imperfections disrupt the regular arrangement of atoms within the metal crystal structure. And then, what starts as tiny imperfections grow to form voids and dislocation loops, compromising the material’s mechanical properties over time.
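
A standard way to reason about this buildup, separate from the phase-field approach the Texas A&M team used, is classical rate theory: coupled equations in which defects are produced by irradiation, destroyed by vacancy-interstitial recombination, and absorbed at sinks such as grain boundaries. Here is a minimal sketch with arbitrary coefficients:

```python
# Minimal rate-theory sketch (not the team's phase-field model) of
# point-defect populations under irradiation. All coefficients are
# arbitrary, chosen only to show the qualitative balance:
#   dC_v/dt = K0 - K_iv*C_v*C_i - Ks_v*C_v
#   dC_i/dt = K0 - K_iv*C_v*C_i - Ks_i*C_i

K0 = 1e-6     # defect-pair production rate by radiation
K_iv = 1e2    # vacancy-interstitial recombination coefficient
Ks_v = 1e-2   # vacancy absorption rate at sinks (e.g., boundaries)
Ks_i = 1e-1   # interstitial absorption rate (faster diffusers)

dt, n_steps = 0.1, 200_000
c_v = c_i = 0.0
for _ in range(n_steps):
    recomb = K_iv * c_v * c_i
    c_v += dt * (K0 - recomb - Ks_v * c_v)
    c_i += dt * (K0 - recomb - Ks_i * c_i)

print(f"steady-state vacancies:     {c_v:.2e}")
print(f"steady-state interstitials: {c_i:.2e}")
# Strengthening the sink terms (finer grains, more boundary area)
# pulls both populations down, one intuition for why nanocrystalline
# materials tolerate radiation better.
```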

While there is some understanding of the type of defects that occur in these materials upon radiation exposure, Ahmed said it has been arduous to model how radiation, along with other factors, such as the temperature of the reactor and the microstructure of the material, together contribute to the formation of defects and their growth.

“The challenge is the computational cost,” he said. “In the past, simulations have been limited to specific materials and to regions spanning a few microns across, but if the domain size is increased to even tens of microns, the computational load drastically jumps.”

In particular, the researchers said that to accommodate larger domain sizes, previous studies have compromised on the number of parameters within the simulation’s differential equations. However, an undesirable consequence of ignoring some parameters over others is an inaccurate description of the radiation damage.

To overcome these limitations, Ahmed and his team designed their simulation with all the parameters, making no assumptions on whether one of them was more pertinent than the other. Also, to perform the now computationally heavy tasks, they used the resources provided by the Texas A&M High Performance Research Computing group.

Upon running the simulation, their analysis revealed that using all parameters in nonlinear combinations yields an accurate description of radiation damage. In particular, in addition to the material’s microstructure, the radiation condition within the reactor, the reactor design, and temperature are also important in predicting the instability in materials due to radiation.

On the other hand, the researchers’ work also sheds light on why specialized nanomaterials are more tolerant of voids and dislocation loops. They found that instabilities are triggered only when grains, the clusters of co-oriented atomic crystals enclosed by grain boundaries, exceed a critical size. So, nanomaterials, with their extremely fine grain sizes, suppress instabilities and are thereby more radiation-tolerant.

“Although ours is a fundamental theoretical and modeling study, we think it will help the nuclear community to optimize materials for different types of nuclear energy applications, especially new materials for reactors that are safer, more efficient, and economical,” said Ahmed. “This progress will eventually increase our clean, carbon-free energy contribution.”

Reference: “Surface and Size Effects on the Behaviors of Point Defects in Irradiated Crystalline Solids” by Abdurrahman Ozturk, Merve Gencturk and Karim Ahmed, 10 August 2021, Frontiers in Materials.
DOI: 10.3389/fmats.2021.684862

Dr. Abdurrahman Ozturk, a research assistant in the nuclear engineering department, is the lead author of this work. Merve Gencturk, a graduate student in the nuclear engineering department, also contributed to this research.

UK
More Nuclear Power Isn’t Needed. So Why Do Governments Keep Hyping It?

David Vetter
Senior Contributor
Sustainability
FORBES


Construction at Hinkley Point C nuclear power station in 2020. © 2020 BLOOMBERG FINANCE LP

It is a truth almost universally acknowledged that, in order to have a chance of limiting global warming, humanity must stop burning fossil fuels to generate electricity. But how do we go about that?

In addition to using renewable sources of energy, such as wind and solar, the U.K. government is in favor of using nuclear power to hasten decarbonization of the country’s energy supply. Prime Minister Boris Johnson has consistently backed the development of “small and advanced reactors,” while last week the country’s Minister for Energy, Clean Growth and Climate Change, Anne-Marie Trevelyan, stated: “While renewables like wind and solar will become an integral part of where our electricity will come from by 2050, they will always require a stable low-carbon baseload from nuclear.”

This pronouncement, offered as a statement of fact, left some observers scratching their heads: here was a U.K. government minister claiming renewables would always require nuclear power to function. Was this true? And why do politicians like to use the word “baseload,” anyway?

“Baseload is a concept used in traditional power systems,” Kang Li, professor of smart energy systems at the University of Leeds, told me. “Mass electrification of transport, industry and heat is expected to stress the power operation significantly. One of the measures to meet the significantly increased electricity demand, while mitigating the fluctuations of renewable generation, is to develop a new nuclear station.”

So, because the availability of wind and sun fluctuates, the government’s reasoning is that, as Britain’s coal and gas turbines are shut down, nuclear power will be required to provide a constant, stable source of electricity.

But many experts, including Steve Holliday, the former CEO of the U.K. National Grid, say that notion is outdated. In a 2015 interview Holliday trashed the concept of baseload, arguing that in a modern, decentralized electricity system, the usefulness of large power stations had been reduced to coping with peaks in demand.

But even for that purpose, Sarah J. Darby, associate professor of the energy program at the University of Oxford’s Environmental Change Institute, told me, nuclear isn’t of much use. “Nuclear stations are particularly unsuited to meeting peak demand: they are so expensive to build that it makes no sense to use them only for short periods of time,” she explained. “Even if it were easy to adjust their output flexibly—which it isn’t—there doesn’t appear to be any business case for nuclear, whether large, small, ‘advanced’ or otherwise.”

In a white paper published in June, a team of researchers at Imperial College London revealed that the quickest and cheapest way to meet Britain’s energy needs by 2035 would be to drastically ramp up the building of wind farms and energy storage, such as batteries. “If solar and/or nuclear become substantially cheaper then one should build more, but there is no reason to build more nuclear just because it is ‘firm’ or ‘baseload,’” Tim Green, co-director of Imperial’s Energy Future Lab told me. “Storage, demand-side response and international interconnection can all be used to manage the variability of wind.”

Another vital issue concerns time. Owing to the well-documented safety and environmental concerns surrounding ionizing radiation, planning and building even a small nuclear reactor takes many years. In 2007, Britain’s large Hinkley Point C nuclear power station was predicted to be up and running by 2017. “Estimated completion date is now 2026,” Darby noted. “And Hinkley C was using established technology. Given the nuclear industry’s record of time delays and overspends, the claim that the ‘latest nuclear technology will be up and running within the next decade’ is unconvincing.”

That’s a problem, given that Britain needs to reduce its emissions 78% by 2035 to stay on track with the Paris Agreement.

Indeed, according to the independent World Nuclear Industry Status Report, nuclear energy “meets no technical or operational need that low-carbon competitors cannot meet better, cheaper and faster.”

So if there isn’t a need for more nuclear power, and it’s too expensive and slow to do the job its proponents are saying it will do, why is the government so keen to back it?

Andy Stirling, professor of science and technology policy at the University of Sussex, is convinced that the pressure to support nuclear power comes from another U.K. commitment: defense. More specifically, the country’s fleet of nuclear submarines.

The nuclear powered submarine HMS Vengeance departs for Devonport prior to re-fit


“The U.S. and France have openly acknowledged this military rationale for new civil nuclear build,” he told me. “U.K. defense literature is also very clear on the same point. Sustaining civil nuclear power, despite its high costs, helps channel taxpayer and consumer revenues into a shared infrastructure, without which support, military nuclear activities would become prohibitively expensive on their own.”

This is no conspiracy theory. In 2018, Stirling and his colleague Philip Johnstone published the findings of their research into “interdependencies between civil and military nuclear infrastructures” in countries with nuclear capability. In the U.S., a 2017 report from the Energy Futures Initiative, which includes testimony from former U.S. Energy Secretary Ernest Moniz, states: “a strong domestic supply chain is needed to provide for nuclear Navy requirements. This supply chain has an inherent and very strong overlap with the commercial nuclear energy sector and has a strong presence in states with commercial nuclear power plants.”

In the U.K., bodies including the Nuclear Industry Council, a joint forum between the nuclear industry and the government, have explicitly highlighted the overlap between the need for a civil nuclear sector and the country’s submarine programs. And this week, Rolls-Royce, which builds the propulsion systems for the country’s nuclear submarines, announced it had secured some $292 million in funding to develop small modular reactors of the type touted by the Prime Minister.

In Stirling’s view, these relationships help to explain “the otherwise serious conundrum, as to why official support should continue for civil nuclear new build at a time when the energy case has become so transparently weak.”

Stirling and other experts say the energy case for nuclear is weak because there are better, cheaper and quicker alternatives that are readily available.

“When there is too little wind and solar, zero emissions generators which can flexibly and rapidly increase their output are needed,” said Mark Barrett, professor of energy and environmental systems modelling at University College London. “These can be renewables, such as biogas, or generators using fuels made with renewables such as hydrogen. But unlike nuclear, these can be turned off when wind and solar are adequate.”

Indeed, Barrett pointed out, renewables are becoming so cheap that energy surpluses won’t necessarily be that big a deal.

“Renewable costs have fallen 60-80% in the last decade with more to come, such that it is lower cost to spill some renewable generation than store it, and predominantly renewable systems are lower cost than nuclear. Renewables can be rapidly built: U.K. wind has increased to 24% of total generation, mostly in just 10 years. And of course renewables do not engender safety and waste problems.”

Sarah Darby agreed, saying “a mix of energy efficiency, storage and more flexible demand shows much more promise for reducing carbon emissions overall and for coping with peaks and troughs in electricity supply.”

“The U.K. market for flexibility services is already delivering effective firm-equivalent capacity on the scale of a large nuclear reactor per year, at costs that are a small fraction of the costs of nuclear power,” Stirling told me. “With costs of flexibility diminishing radically—in batteries, other storage, electric vehicles, responsive demand, hydrogen production—the scope for further future cost savings is massive.”

“There is no foreseeable resource constraint on renewables or smart grids that makes the case for nuclear anywhere near credible,” he added. “That the U.K. Government is finding itself able to sustain such a manifestly flawed case, with so little serious questioning, is a major problem for U.K. democracy.”

In the U.K., both the incumbent Conservative party and the main opposition party, Labour, support the development of new and advanced nuclear power reactors. In an emailed response to questions sent to the U.K. government’s Department for Business, Energy and Industrial Strategy, a government spokesperson categorically denied any link between the civil nuclear sector and the defense industry:

“The civil nuclear sector is separate from the defence nuclear programme, and any suggestions otherwise are simply untrue. Civil nuclear is not funded by the defence budget, and civil nuclear materials are kept under international safeguards and cannot be diverted to defence programmes.”

The spokesperson went on to reiterate the government’s case for nuclear:

“Nuclear power remains an important and reliable source of clean electricity, for times when the wind does not blow and the sun does not shine. Alongside renewables, it will continue to support the delivery of a low cost energy system, while meeting our ambitious climate change goals, a position shared by the independent Committee on Climate Change.”

I contacted the office of Labour’s shadow secretary of state for business, energy and industrial strategy Edward Miliband for comment, but no response has been forthcoming.

Update 08/06/2021 21:14 BST: This post has been updated to include a response from BEIS, the U.K. government Department for Business, Energy and Industrial Strategy.


David Vetter
My key interests are in decarbonization and the development of circular economies.



The US Army tried portable nuclear power at remote bases 60 years ago – it didn’t go well

Part of a portable nuclear power plant arrives at Camp Century in 1960

Bettmann Archive/Getty Images


July 20, 2021

In a tunnel 40 feet beneath the surface of the Greenland ice sheet, a Geiger counter screamed. It was 1964, the height of the Cold War. U.S. soldiers in the tunnel, 800 miles from the North Pole, were dismantling the Army’s first portable nuclear reactor.

Commanding Officer Joseph Franklin grabbed the radiation detector, ordered his men out and did a quick survey before retreating from the reactor.

He had spent about two minutes exposed to a radiation field he estimated at 2,000 rads per hour, enough to make a person ill. When he came home from Greenland, the Army sent Franklin to the Bethesda Naval Hospital. There, he set off a whole body radiation counter designed to assess victims of nuclear accidents. Franklin was radioactive.

The Army called the reactor portable, even at 330 tons, because it was built from pieces that each fit in a C-130 cargo plane. It was powering Camp Century, one of the military’s most unusual bases.

The Camp Century tunnels started as trenches cut into the ice.
U.S. Army Corps of Engineers

Camp Century was a series of tunnels built into the Greenland ice sheet and used for both military research and scientific projects. The military boasted that the nuclear reactor there, known as the PM-2A, needed just 44 pounds of uranium to replace a million or more gallons of diesel fuel. Heat from the reactor ran lights and equipment and allowed the 200 or so men at the camp as many hot showers as they wanted in that brutally cold environment.
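
That fuel-replacement claim is easy to sanity-check with textbook energy densities; the figures below are standard approximations, not numbers from the PM-2A records.

```python
# Back-of-the-envelope check: 44 lb of uranium vs. a million gallons
# of diesel. Standard approximate energy densities, not PM-2A data.

DIESEL_J_PER_GALLON = 1.45e8   # ~145 MJ per gallon
FISSION_J_PER_KG = 8.0e13      # ~80 TJ per kg of uranium fissioned
LB_TO_KG = 0.4536

uranium_energy = 44 * LB_TO_KG * FISSION_J_PER_KG
diesel_energy = 1.0e6 * DIESEL_J_PER_GALLON

print(f"44 lb fully fissioned: {uranium_energy:.2e} J")
print(f"1M gallons of diesel:  {diesel_energy:.2e} J")
print(f"ratio: {uranium_energy / diesel_energy:.0f}x")
# Complete fission of ~20 kg yields roughly 11x the diesel figure;
# since a reactor fissions only a fraction of its fuel before
# refueling, "a million or more gallons" is a plausible equivalence.
```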

The PM-2A was the third child in a family of eight Army reactors, several of them experiments in portable nuclear power.

A few were misfits. PM-3A, nicknamed Nukey Poo, was installed at the Navy base at Antarctica’s McMurdo Sound. It made a nuclear mess in the Antarctic, with 438 malfunctions in 10 years including a cracked and leaking containment vessel. SL-1, a stationary low-power nuclear reactor in Idaho, blew up during refueling, killing three men. SM-1 still sits 12 miles from the White House at Fort Belvoir, Virginia. It cost US$2 million to build and is expected to cost $68 million to clean up. The only truly mobile reactor, the ML-1, never really worked.
The Army abandoned its truck-mounted portable reactor program in 1965. 
This is the ML-1. U.S. Army Corps of Engineers

Nearly 60 years after the PM-2A was installed and the ML-1 project abandoned, the U.S. military is exploring portable land-based nuclear reactors again.

In May 2021, the Pentagon requested $60 million for Project Pele. Its goal: Design and build, within five years, a small, truck-mounted portable nuclear reactor that could be flown to remote locations and war zones. It would be able to be powered up and down for transport within a few days.

The Navy has a long and mostly successful history of mobile nuclear power. The first two nuclear submarines, the Nautilus and the Skate, visited the North Pole in 1958, just before Camp Century was built. Two other nuclear submarines sank in the 1960s – their reactors sit quietly on the Atlantic Ocean floor along with two plutonium-containing nuclear torpedoes. Portable reactors on land pose different challenges – any problems are not hidden under thousands of feet of ocean water.

Those in favor of mobile nuclear power for the battlefield claim it will provide nearly unlimited, low-carbon energy without the need for vulnerable supply convoys. Others argue that the costs and risks outweigh the benefits. There are also concerns about nuclear proliferation if mobile reactors are able to avoid international inspection.
A leaking reactor on the Greenland ice sheet

The PM-2A was built in 18 months. It arrived at Thule Air Force Base in Greenland in July 1960 and was dragged 138 miles across the ice sheet in pieces and then assembled at Camp Century.

When the reactor went critical for the first time in October, the engineers shut it down immediately because the PM-2A leaked neutrons, which can harm people. The Army fashioned lead shields and built walls of 55-gallon drums filled with ice and sawdust in an attempt to protect the operators from radiation.

‘The Big Picture,’ an Army TV show distributed to U.S. stations, dedicated a 1961 episode to Camp Century and the reactor.

The PM-2A ran for two years, making fossil fuel-free power and heat and far more neutrons than was safe.

Those stray neutrons caused trouble. Steel pipes and the reactor vessel grew increasingly radioactive over time, as did traces of sodium in the snow. Cooling water leaking from the reactor contained dozens of radioactive isotopes, potentially exposing personnel to radiation and leaving a legacy in the ice.

When the reactor was dismantled for shipping, its metal pipes shed radioactive dust. Bulldozed snow that was once bathed in neutrons from the reactor released radioactive flakes of ice.

Franklin must have ingested some of the radioactive isotopes that the leaking neutrons made. In 2002, he had a cancerous prostate and kidney removed. By 2015, the cancer had spread to his lungs and bones. He died of kidney cancer on March 8, 2017, as a retired, revered and decorated major general.

Joseph Franklin (right) with pieces of the decommissioned PM-2A reactor at Thule Air Base. U.S. Army Photograph, from Franklin Family, Dignity Memorial

Camp Century’s radioactive legacy

Camp Century was shut down in 1967. During its eight-year life, scientists had used the base to drill down through the ice sheet and extract an ice core that my colleagues and I are still using today to reveal secrets of the ice sheet’s ancient past. Camp Century, its ice core and climate change are the focus of a book I am now writing.

The PM-2A was found to be highly radioactive and was buried in an Idaho nuclear waste dump. Army “hot waste” dumping records indicate the Army left radioactive cooling water buried in a sump in the Greenland ice sheet.

When scientists studying Camp Century in 2016 suggested that the warming climate now melting Greenland’s ice could expose the camp and its waste, including lead, fuel oil, PCBs and possibly radiation, by 2100, relations between the U.S., Denmark and Greenland grew tense. Who would be responsible for the cleanup and any environmental damage?

A schematic diagram of Camp Century’s nuclear reactor in the Greenland ice sheet. U.S. Army Corps of Engineers.

Portable nuclear reactors today


There are major differences between nuclear power production in the 1960s and today.

The Pele reactor’s fuel will be sealed in pellets the size of poppy seeds, and it will be air-cooled so there’s no radioactive coolant to dispose of.

Being able to produce energy with fewer greenhouse emissions is a positive in a warming world. The U.S. military’s liquid fuel use is close to all of Portugal’s or Peru’s. Not having to supply remote bases with as much fuel can also help protect lives in dangerous locations.

But, the U.S. still has no coherent national strategy for nuclear waste disposal, and critics are asking what happens if Pele falls into enemy hands. Researchers at the Nuclear Regulatory Commission and the National Academy of Sciences have previously questioned the risks of nuclear reactors being attacked by terrorists. As proposals for portable reactors undergo review over the coming months, these and other concerns will be drawing attention.

The U.S. military’s first attempts at land-based portable nuclear reactors didn’t work out well in terms of environmental contamination, cost, human health and international relations. That history is worth remembering as the military considers new mobile reactors.

Author
Paul Bierman
Fellow of the Gund Institute for Environment, Professor of Natural Resources, University of Vermont
Disclosure statement
Paul Bierman receives funding from the U.S. National Science Foundation