Wednesday, March 02, 2022

Improved fuel cell performance using semiconductor manufacturing technology

Illustration of the step-by-step synthesis process for the preparation of ternary nanoparticle catalysts and electron structure rearrangement by electron transfer between metal atoms. Credit: Korea Institute of Science and Technology (KIST)

A research team in Korea has synthesized metal nanoparticles that can drastically improve the performance of hydrogen fuel cell catalysts by using semiconductor manufacturing technology. The Korea Institute of Science and Technology (KIST) announced that the research team led by Dr. Sung Jong Yoo of the Hydrogen Fuel Cell Research Center has succeeded in synthesizing nanoparticles by a physical method rather than through conventional chemical reactions, using sputtering, a thin-metal-film deposition technique used in semiconductor manufacturing.

Metal nanoparticles have been studied in various fields over the past few decades. Recently, they have been attracting attention as critical catalysts for hydrogen fuel cells and for water electrolysis systems that produce hydrogen. Metal nanoparticles are mainly prepared through complex chemical synthesis. In addition, they are prepared using organic substances harmful to the environment and humans, so additional costs are inevitably incurred for their treatment, and the synthesis conditions are challenging. A new nanoparticle synthesis method that can overcome the shortcomings of existing chemical synthesis is therefore required to establish the hydrogen energy regime.

The sputtering process applied by the KIST research team is a technology used to coat thin metal films during semiconductor manufacturing. In this process, plasma is used to cut bulk metal into nanoparticles, which are then deposited on a substrate to form a thin film. The research team prepared nanoparticles using "glucose," a special substrate that prevented the metal nanoparticles from merging into a thin film during the plasma process. Because the synthesis relies on the principle of physical vapor deposition by plasma rather than on chemical reactions, metal nanoparticles could be synthesized by this simple method, overcoming the limitations of existing chemical synthesis methods.

Low- and high-magnification TEM images of PtCo/C and PtCoV/C. Credit: Korea Institute of Science and Technology (KIST)

The development of new catalysts has been hindered because the existing chemical synthesis methods limited the types of metals that could be used as nanoparticles. In addition, the synthesis conditions must be changed depending on the type of metal. However, it has become possible to synthesize nanoparticles of more diverse metals through the developed synthesis method. In addition, if this technology is simultaneously applied to two or more metals, alloy nanoparticles of various compositions can be synthesized. This would lead to the development of high-performance nanoparticle catalysts based on alloys of various compositions.

The KIST research team synthesized a platinum-cobalt-vanadium alloy nanoparticle catalyst using this technology and applied it to the oxygen reduction reaction in hydrogen fuel cell electrodes. The catalyst's activity was seven times that of the platinum catalysts and three times that of the platinum-cobalt alloy catalysts commercially used in hydrogen fuel cells. Furthermore, the researchers used computer simulation to investigate the effect of the newly added vanadium on the other metals in the nanoparticles, finding that vanadium improved catalyst performance by optimizing the platinum-oxygen bonding energy.

Dr. Sung Jong Yoo of KIST commented, "Through this research, we have developed a synthesis method based on a novel concept, which can be applied to research on metal nanoparticles for the development of water electrolysis systems, solar cells, and petrochemicals." He added, "We will strive to establish a complete hydrogen economy and develop carbon-neutral technology by applying alloy nanoparticles with new structures, which have been difficult to implement, to eco-friendly energy technologies including hydrogen fuel cells."

More information: Injoon Jang et al, Plasma-induced alloying as a green technology for synthesizing ternary nanoparticles with an early transition metal, Nano Today (2021). DOI: 10.1016/j.nantod.2021.101316


Provided by National Research Council of Science & Technology

Biofuels may not be as green as we've been told

Are biofuels better for the environment?
Not necessarily.

By Christopher McFadden
Feb 27, 2022
INTERESTING ENGINEERING

LONG READ


Biofuel factory. photosbyjim/iStock

The combustion engine is, hands down, one of the most important inventions of all time. But, it comes with a very high cost to the environment - hazardous emissions.

While many leaps in efficiency and emission control have been made over the decades, we can never fully eliminate the release of emissions like carbon dioxide into the air. But, what if the fuel for these engines could be grown rather than dug up?

And that is precisely the promise that biofuels have made over the last few decades. However, not everything is all that it seems when it comes to this "holy grail" of clean energy.

What are biofuels?

Biofuels, as the name might suggest, are types of liquid and gas fuels created "naturally" through the conversion of some kind of biomass. While the term can be used to encompass solid fuels, like wood, these are more commonly termed biomass rather than biofuel per se.

For this reason, biomass tends to be used to denote the raw material that biofuels are derived from or those solid fuels that are created by thermally or chemically altering raw materials into things like torrefied pellets or briquettes.

Various forms of biofuel exist, but by far the most commonly used today are ethanol (sometimes called bioethanol) and biodiesel.

The former is an alcohol and is usually blended with more conventional fuels, like gasoline, to increase octane and cut down on the toxic carbon monoxide and smog-causing emissions usually associated with combustion engines. The most common form of the blend, called E10, is a mixture of 10 percent ethanol and 90 percent gasoline.

Some more modern vehicles, called flexible-fuel vehicles, can actually run on another blend of ethanol and gasoline called E85 that contains between 51 percent and 83 percent ethanol. According to the U.S. Department of Energy, roughly 98% of all gasoline you put in your car will contain some percentage of ethanol.

Most ethanol for fuel use is made from plant starches and sugars, but an increasing number of biofuels in development use cellulose and hemicellulose - the non-edible fibrous materials that constitute the bulk of plant matter. Several commercial-scale cellulosic ethanol biorefineries are currently operational in the United States.

The most common plant "feedstocks" used to make ethanol are corn, grain, and sugar cane.

As with the alcohol in your favorite beer or wine, bioethanol is created through the age-old process of fermentation: microorganisms like bacteria and yeast use plant sugars as an energy source and produce ethanol as a waste product.

This ethanol can then be fractionated off, distilled, and concentrated, ready for use as a liquid fuel. All well and good, but blending with ethanol does come at a cost.

Seanpanderson/Wikimedia Commons

As the U.S. Department of Energy explains, "ethanol contains less energy per gallon than gasoline, to varying degrees, depending on the volume percentage of ethanol in the blend. Denatured ethanol (98% ethanol) contains about 30% less energy than gasoline per gallon. Ethanol’s impact on fuel economy is dependent on the ethanol content in the fuel and whether an engine is optimized to run on gasoline or ethanol."
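As a rough sanity check on those figures, a blend's energy content can be estimated as a volume-weighted average of its components. A minimal sketch, assuming round-number heating values (the BTU figures below are approximations for illustration, not DOE data):

```python
# Back-of-envelope check of the energy-content claim, assuming approximate
# lower heating values in BTU per gallon; exact figures vary by source.
GASOLINE_BTU_PER_GAL = 116_090   # conventional gasoline (approximate)
ETHANOL_BTU_PER_GAL  = 76_330    # pure ethanol (approximate)

def blend_energy_ratio(ethanol_fraction: float) -> float:
    """Energy per gallon of an ethanol/gasoline blend, relative to pure gasoline."""
    blend = (ethanol_fraction * ETHANOL_BTU_PER_GAL
             + (1 - ethanol_fraction) * GASOLINE_BTU_PER_GAL)
    return blend / GASOLINE_BTU_PER_GAL

for name, frac in [("E10", 0.10), ("E85 (~70% ethanol midpoint)", 0.70), ("E100", 1.0)]:
    print(f"{name}: {blend_energy_ratio(frac):.0%} of gasoline's energy per gallon")
# E10 lands around 97%, E100 around 66% -- consistent with "about 30% less".
```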

Biodiesel, the other most common biofuel, is made from vegetable and animal fats and is generally considered a cleaner-burning direct replacement for petroleum-based diesel fuel. It is non-toxic, biodegradable, and is produced using a combination of alcohol and vegetable oil, animal fat, or recycled cooking grease. It is a mono-alkyl ester produced by the process of transesterification, where the feedstock reacts with an alcohol (such as methanol) in the presence of a catalyst, to produce biodiesel and glycerin.
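To make the transesterification step concrete, here is a rough mass balance. It assumes triolein as a stand-in triglyceride, which is a simplification - real feedstocks are mixtures, and the molar masses are approximate:

```python
# Rough stoichiometry of transesterification, using triolein as a
# representative triglyceride (an assumption for illustration).
M_TRIOLEIN      = 885.4   # g/mol, stand-in triglyceride
M_METHANOL      = 32.04   # g/mol
M_METHYL_OLEATE = 296.5   # g/mol, the biodiesel ester produced
M_GLYCEROL      = 92.09   # g/mol

# Triglyceride + 3 CH3OH -> 3 methyl ester (biodiesel) + glycerin
oil_kg = 1.0
mol_oil = oil_kg * 1000 / M_TRIOLEIN
methanol_kg  = 3 * mol_oil * M_METHANOL / 1000
biodiesel_kg = 3 * mol_oil * M_METHYL_OLEATE / 1000
glycerol_kg  = mol_oil * M_GLYCEROL / 1000

print(f"1 kg oil + {methanol_kg:.3f} kg methanol -> "
      f"{biodiesel_kg:.3f} kg biodiesel + {glycerol_kg:.3f} kg glycerin")
```

On these numbers, a kilogram of oil yields roughly its own weight in biodiesel, plus about 100 grams of glycerin as a byproduct.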

Like ethanol, biodiesel can be blended with regular diesel to make cleaner fuels. Such fuels range up to pure biodiesel, called B100; the most common blend, B20, consists of 20 percent biodiesel and 80 percent fossil-fuel diesel.

Just like ethanol, biodiesel is not without its own problems when compared to more traditional fuels. For example, it can be problematic in colder climates as it has a tendency to crystallize. Generally speaking, the less biodiesel content, the better the performance of the fuel in cold climates.

This issue can also be overcome with a "flow improver," an additive that prevents the fuel from gelling.

Another form of biodiesel that is also quite popular is "green diesel," or renewable diesel. It is formed by the hydrocracking of vegetable oils or animal fats (or through gasification, pyrolysis, or other biochemical and thermochemical technologies) to produce a product that is almost indistinguishable from conventional diesel.

Vegetable oil can also be used unmodified as a fuel source in some older diesel engines that lack common-rail fuel injection systems.

razvanchirnoaga/iStock

Other forms of biofuel also exist, including biogas (or biomethane), created through the anaerobic digestion (basically, rotting) of organic material. "Syngas," another gaseous biofuel, is a mixture of carbon monoxide, hydrogen, and other hydrocarbons created through the partial combustion of biomass.

Worldwide biofuel production exceeded 43 billion gallons (161 billion liters) in 2021, constituting around 4% of all the world's fuel used for road transportation. Some organizations, like the International Energy Agency, hope this share will increase to 25% by 2050.

Why are biofuels considered green?

In order to fully answer this question, we need to take a little trip back in time. Around the turn of the 21st century, many governments around the world were scratching their heads trying to figure out ways to cut their countries' carbon emissions.

One of the main polluting activities happened to be the cars and trucks used to ferry people and goods around. Transport is, in fact, one of the largest contributors to human carbon dioxide emissions, accounting, according to some sources, for almost a quarter of all annual emissions worldwide.

The transport sector is also one of the fastest-growing worldwide, as a result of the growing use of personal cars in many countries. Of these vehicles, the vast majority still use combustion engines rather than "cleaner" alternatives like the growing electric vehicle market.

Mailson Pignata/iStock

Something, in policymakers' view, needed to be done, and so the concept of biofuels was proposed as a potential "silver bullet."

Biofuels are formed primarily from the growth and harvesting of living plant material, rather than from long-sequestered hydrocarbon sources like fossil fuels. The main argument is that biofuels draw down roughly as much carbon dioxide as they release when combusted, because carbon is stored in the plant tissue and soil as plants grow.

They are, in effect, "carbon neutral", and in some cases have been shown to be carbon negative - in other words, they remove more carbon from the atmosphere than is released when they are harvested, processed, and burned/converted.

There are other benefits to biofuel feedstocks too, including, but not limited to, the generation of co-products like protein that can be used as animal feed. This saves the energy (and the associated carbon dioxide emissions) that would otherwise be used to make animal feed by other means.

To this end, some countries went full-bore with the idea: Brazil, for example, established a full-blown bioethanol industry about 40 years ago, and other countries began to follow suit. In 2005, the United States established its first national renewable fuel standard under the Energy Policy Act, which called for 7.5 billion gallons of biofuels to be used annually by 2012.

This act also required fossil fuel-producing companies to mix biofuels with regular liquid fuels to reduce their long-term impact on the environment. The European Union produced a similar requirement in its 2008 "Renewable Energy Directive", which required EU countries to source at least 10 percent of their transport fuel from renewable sources by 2020.

These stringent requirements have resulted in huge growth in the biofuel industry around the world. But how accurate are the claims that biofuels are better for the environment than, say, fossil fuels?


Are biofuels actually better for the environment?


Like anything in life, it is important to remember that there is no such thing as a perfect solution, only a compromise. For all the benefits that products like biofuels yield, they have some very important drawbacks and even environmental impacts that cannot be ignored if we are being truly honest about them.

For example, the widely held claim that biofuels are carbon-neutral, or even carbon-negative, does not hold up well under scrutiny. Various studies have shown that different biofuels vary widely in their greenhouse gas emissions when compared, like-for-like, with gasoline.

Biofuel feedstocks also carry other production costs - harvesting, processing, and transportation - that are often not factored into calculations, or are outright ignored. In some cases, depending on the methods used to produce the feedstock and process the fuel, more greenhouse gas emissions can arise than from fossil fuels.

Discussion on this topic tends to place the most emphasis on carbon dioxide, but it is only one of many harmful emissions that can have very serious consequences for the environment. Nitrous oxide (N2O), one of the nitrogen oxides (NOx) produced by agriculture and combustion, is another.

Not only do nitrogen oxides contribute to harmful effects like acid rain, but nitrous oxide also has a very significant "global warming potential" - orders of magnitude higher than carbon dioxide's, around 300 times in fact. Nitrous oxide also stays in the atmosphere far longer: roughly 114 years, compared with the roughly 4 years an individual carbon dioxide molecule spends in the air before being recycled (though the added CO2 burden itself persists far longer).
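In practice, emissions of different gases are compared by converting them to CO2-equivalents using the global warming potential. A minimal sketch using the ~300x figure cited above (the IPCC's exact GWP-100 value for nitrous oxide varies slightly between reports):

```python
# CO2-equivalent of nitrous oxide emissions, using the ~300x global
# warming potential cited in the text.
GWP_N2O = 300

def co2_equivalent_tonnes(n2o_tonnes: float) -> float:
    """Tonnes of CO2 with the same 100-year warming effect."""
    return n2o_tonnes * GWP_N2O

print(co2_equivalent_tonnes(1.0))  # 1 tonne of N2O warms like ~300 tonnes of CO2
```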

Nitrous oxide also happens to be particularly bad for the ozone layer - which is not nice. And it doesn't end there.

Nitrogen oxides are generated at other stages of biofuel production, and again at final use, when the biofuel is combusted in your car.

aydinmutlu/iStock

In fairness, all forms of agriculture release nitrous oxide to some degree, so biofuel production should not be blamed in isolation for any increase in emissions over the last few decades. But since biofuels are promoted heavily by many governments, there is an urgent need for more research on their impact on nitrogen oxide emissions.

Another potentially important difference between biofuels and conventional fossil fuels is carbonyl emissions. Carbonyl is a divalent chemical unit consisting of a carbon (C) atom and an oxygen (O) atom connected by a double bond, and it is a constituent of molecules like carboxylic acids, esters, anhydrides, acyl halides, amides, and quinones, among other compounds.

Some carbonyl compounds are known to be potentially very hazardous to human health. Studies on this subject have found that biofuels like biodiesel release considerably more carbonyl emissions - formaldehyde, acetaldehyde, acrolein, acetone, propionaldehyde, and butyraldehyde - than pure diesel.

Yet other studies have revealed that the combustion of biofuels also comes with elevated emissions of other hazardous pollutants - volatile organic compounds (VOCs), polycyclic aromatic hydrocarbons, and heavy metals - that have been reported to endanger human health.

Another very serious issue with biofuels is the direct and indirect greenhouse gas emissions from land-use change. For example, clearing forests or grasslands to free up land for biofuel production has a very serious impact on the environment.

Scharfsinn86/iStock

Some studies have shown that this kind of land clearance can release hundreds to thousands of tonnes of carbon dioxide per hectare, for the sake of "saving" 1.8 tonnes per hectare per year for maize grown for bioethanol, or 8.6 tonnes per hectare per year for switchgrass. The land is also converted from growing food to growing fuel biomass.
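Those figures imply a long carbon "payback period" before cleared land produces any net climate benefit. A quick calculation, treating the clearance debt as a range since the studies cite one:

```python
# Carbon payback period for land cleared to grow biofuel feedstock,
# using the per-hectare figures from the studies cited above.
annual_saving_t_per_ha = {"maize (bioethanol)": 1.8, "switchgrass": 8.6}

for debt_t_per_ha in (300, 1000, 3000):   # tonnes CO2 released by clearing 1 ha
    for crop, saving in annual_saving_t_per_ha.items():
        years = debt_t_per_ha / saving
        print(f"debt {debt_t_per_ha} t/ha, {crop}: ~{years:.0f} years to break even")
# Even the optimistic cases take decades; the pessimistic ones take centuries.
```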

This has been reinforced recently by a study of corn-ethanol fuels in the U.S., which found that rising demand for corn as a biofuel feedstock pushed up its price and thereby incentivized the conversion of yet more land to corn cultivation.

Other studies also show a serious "opportunity cost" to using valuable land in this way. If governments are serious about reducing carbon dioxide levels in the atmosphere, land converted from forest, for example, should arguably have been left the way it was. A better strategy might be to reforest existing farmland that has been converted for biofuel production.

The conversion of wild virgin land for biofuel production also has serious implications for biodiversity and local habitats, for obvious reasons. Research also suggests that the production of biofuel feedstocks such as corn and soy could increase water pollution from nutrients, pesticides, and sediment, and deplete aquifers.

There is one area, however, where biofuels appear to be considerably safer for the environment: biodegradation. Various studies comparing neat vegetable oils, biodiesel, and biodiesel/petroleum diesel blends against neat 2-D diesel fuel have shown that the bio-based fuels tend to break down in the environment much faster than conventional petrol or diesel.

Under controlled conditions, these substances break down about five times faster than petroleum diesel and leave far fewer toxic byproducts. This is encouraging and suggests that events such as oil spills could be less of an environmental disaster if the tanker were filled with biofuel.

So, are biofuels all they are cracked up to be?


In short, yes, but also no.

While some biofuels are clearly better for the environment than continuing to dig up and burn fossil fuels, a more all-encompassing view needs to be taken by regulators and decision-makers. Not all biofuels and methods of biofuel production have the same impact.

BanksPhotos/iStock

We have already covered some of the issues above, like focusing on reforestation instead, but other things can also be done.

Energy conservation and improved efficiency are critical factors. The combustion engine, while receiving a lot of bad press over recent decades, is still one of the best machines our species has ever devised for converting fuel into useful work.


It has quite literally revolutionized our way of life. If more focus were put on improving combustion engines' efficiency rather than banning them outright, significant reductions in emissions could be made over time.

According to some studies, improving the fuel economy of every vehicle in the United States by just one mile per gallon would reduce greenhouse gas emissions by more than all the "savings" provided by all biofuel maize production. Other incremental improvements in development could further improve the efficiency of combustion engines.
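A back-of-envelope version of that comparison is below. All inputs are round-number assumptions chosen for illustration - they are not the cited study's data:

```python
# Illustrative estimate of the fleet-wide effect of a 1 mpg improvement.
# All inputs are assumptions, not figures from the cited study.
MILES_PER_YEAR = 3.0e12          # assumed U.S. light-duty vehicle-miles per year
BASELINE_MPG   = 25.0            # assumed fleet-average fuel economy
KG_CO2_PER_GAL = 8.9             # approximate CO2 per gallon of gasoline burned

def annual_co2_megatonnes(mpg: float) -> float:
    gallons = MILES_PER_YEAR / mpg
    return gallons * KG_CO2_PER_GAL / 1e9   # kg -> megatonnes

saved = annual_co2_megatonnes(BASELINE_MPG) - annual_co2_megatonnes(BASELINE_MPG + 1)
print(f"~{saved:.0f} Mt CO2 avoided per year from a 1 mpg fleet-wide gain")
# On these assumptions, roughly 40 Mt CO2 per year -- a substantial saving.
```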

One example, called Transient Plasma Ignition, is a like-for-like replacement for traditional spark plugs that has been shown to increase combustion efficiency in combustion engines by as much as 20%. These kinds of spark plugs also benefit from a longer lifespan than conventional ones.


But, such solutions still rely on digging up and using fossil fuels. With the drive, pun intended, for the decarbonization of many economies around the world, such technologies realistically offer only a brief reprieve for the internal combustion engine going forward.

But that doesn't mean research and development in this field should ease up. Any improvements to the efficiency of internal combustion can also, albeit indirectly, increase the efficiency of biofuels in combustion engines.
Keeratikorn Suttiwong/iStock

But there are some areas where biofuels are clearly advantageous. In many cases, biowaste from other industrial processes can be turned into biofuel rather than thrown away - whether by digesting waste food to make biogas/bio-LPG or by turning waste products from beer production into biofuel.

And it is this that might, in the end, be the main benefit of biofuels over their conventional fossil fuel alternatives. If more emphasis is put on using the waste products of existing processes to make biofuels, rather than converting virgin or agricultural land to feedstock crops, we can get all the benefits of biofuels with less of their environmental cost.

That is, of course, for as long as our species continues to use combustion-based energy production - which is likely to be many years to come.

They are, after all, so good at what they do.

High Oil Prices Aren’t Enough To Tempt Shale Producers

  • America’s shale industry is looking to ramp up production, but it is facing two major hurdles that could curb its trajectory.

  • Supply chain issues, runaway inflation and a growing labor shortage have hindered the industry’s ability to increase output. 

  • "Even if the president wants us to grow, I just don't think the industry can grow anyway," said Pioneer CEO Scott Sheffield.

U.S. shale production is back in growth mode, but inflation and supply chain bottlenecks could hobble the growth trajectory this year despite the tempting economics of $100 oil.  

The United States is set to post an annual record of 12.6 million barrels per day (bpd) of crude oil production in 2023, while this year's average is forecast at 12 million bpd, up by 760,000 bpd from last year, according to EIA's latest estimates.

Yet, cost inflation, labor and equipment shortages, and continued restraint in spending and drilling from the biggest public independents could slow output growth. The U.S. shale patch is set to play a smaller role in potentially bringing down international crude oil prices and American gasoline prices than it did in previous upcycles, when annual growth topped 1.2 million bpd in 2019 and 1.6 million bpd in 2018.

'Headwinds To Growth'

"We think the U.S. is definitely going to face some headwinds in growth on this year," Ezra Yacob, chief executive of shale giant EOG Resources, said on the earnings call last week.

"When we think about the growth forecasts that are out there and have been publicly discussed, we're probably a bit more on the lower end in general on the crude and condensate side. And the reason for that is I think you're seeing commitment from the North American E&P space to remain disciplined and then you couple that with some of the inflationary and supply chain pressures," Yacob added.

EOG Resources president and chief operating officer Billy Helms noted that there are a lot of headwinds for the U.S. shale patch to ramp up activity and grow production this year.

Related: Oil Prices Retreat As Biden Leaves Energy Out Of Sanctions Package

Equipment and labor constraints are some of those headwinds, Helms said on the call, giving as examples challenges in attracting workers for the drilling and frac stages and the fact that "most of the good equipment is already under employment today."

"And hopefully, the industry can strengthen and get better on a go-forward basis. But this year is going to be a challenging year from that side," said Helms.

Over the past weeks, other shale producers and oilfield services providers have flagged headwinds to this year's growth. For example, frac sand in the biggest shale play, the Permian, is in short supply, threatening to slow drilling programs at some producers and sending sand prices skyrocketing. This adds further cost pressure to American oil producers, who are already grappling with cost inflation in equipment and labor shortages.  

$100 oil could unleash a lot more U.S. oil production, in theory, but supply chain constraints and record-high sand prices are likely to temper growth, analysts say.

"There is no doubt, the much-anticipated multiyear upcycle is now underway," Jeff Miller, CEO at the biggest fracking services provider, Halliburton, said on the Q4 earnings call in January. But he also noted that "As activity accelerates, the market is seeing tightness related to trucking, labor, sand, and other inputs."

Biggest Independents Rein In Production Growth

Supply chain and cost inflation aside, the largest public independents in the U.S. shale patch are not racing to pump too much crude, even at $100 oil.

EOG Resources, for example, guides for crude and condensate production in the range of 455,000 to 467,000 bopd for 2022, compared to 443,000 bopd for 2021, suggesting that one of the biggest listed independents follows the other public shale firms in pledging to cap growth and return more cash to shareholders.

Pioneer Natural Resources, the biggest oil producer in the Permian, will not open the taps and will stick to discipline even at $200 oil, says chief executive Scott Sheffield.  

"Whether it's $150 oil, $200 oil, or $100 oil, we're not going to change our growth plans," Sheffield told Bloomberg Television in an interview last month.  

The capital discipline from the public independents in the U.S. shale patch doesn't bode well for U.S. gasoline prices and for President Biden's approval ratings. Yet, companies like Pioneer Natural Resources, Continental Resources, and Devon Energy are keeping discipline and plan to grow production by no more than 5 percent annually. Diamondback Energy is also part of that crowd.

"Diamondback's team and board believe that we have no reason to put growth before returns. Our shareholders, the owners of our company, agreed. And as a result, we will continue to be disciplined, keeping our oil production flat this year," chairman and CEO Travis Stice said on the earnings call last week.

Capex discipline from the largest shale firms and the supply chain bottlenecks for many producers will cap U.S. oil production growth, according to Pioneer's Sheffield. 

"Several other producers are having trouble getting frack crews, they're having trouble getting labor and they're having trouble getting sand; that's going to keep anybody from growing," he told Bloomberg in February. 

"Even if the president wants us to grow, I just don't think the industry can grow anyway," said Sheffield.  

 By Tsvetana Paraskova for Oilprice.com

Could Carbon Markets Be Impacted Due to European Pipeline Freeze?


Germany has placed a hold on the Nord Stream 2 natural gas pipeline, which runs between Russia and Germany beneath the Baltic Sea. The pipeline is essentially complete and operable.

This move by Germany will not impact the immediate supply of natural gas. However, it could affect Europe’s already strained natural gas resources – and impact carbon markets.
Natural gas and oil prices have increased.

Fears that Russia will withhold future natural gas supplies have driven prices up to $90 per megawatt-hour - an increase of 10%.

The price of oil - also a major Russian export to Europe - rose 1.5% to $99.50 per barrel, the highest level Europe has seen since 2014.

U.S. natural gas prices also increased, though less than in Europe.

Luke Oliver, Managing Director and Head of Strategy at KraneShares, told ETF Trends, “Halting certification of the Nordstream2 gas pipeline will put increasing pressure on natural gas prices, which in turn make fuel switching from coal to gas more expensive and increases demand for carbon allowances.”

Simply put, if the price of switching from coal to gas rises, demand in carbon markets may increase even more.

Oliver went on to say, “This puts pressure on the entire energy complex. This is no doubt positive for a carbon price, albeit sadly not necessarily positive for emission reductions.”

Per Oliver, “2022 has been an interesting year already for carbon markets. With new proposals around upper price band triggers, we’ve seen some volatility; however, our modeling would suggest that even IF the proposal was adopted, it wouldn’t meaningfully limit upside potential.”

Only time will tell how this will impact the energy sector and carbon markets.

Regardless of what will be, one thing is for sure: we all hope for peace in the region.

MiHoYo's spending that Genshin cash on an experimental fusion reactor

By Rich Stanton 
PC GAMER

'Tech otakus save the world' is the company motto, after all.

(Image credit: miHoYo)

Developer miHoYo has been around since 2012, but 2020's Genshin Impact was its first global success: the game remains enormously popular, having made over $2 billion in its first year from the mobile version alone. The company has quite a charming motto—Tech otakus save the world—and, with this big ol' bunch of money burning a hole in its metaphorical pocket, has decided to have a go at living up to those words.

miHoYo recently led a funding round alongside NIO Capital, a Chinese investment firm, and a total of $63 million will be invested in a company called Energy Singularity (as per Beijing's PanDaily). The funds will be used for R&D of a "small tokamak experimental device based on high temperature superconducting material, and advanced magnet systems that can be used for the next generation of high-performance fusion devices."

What's a tokamak to you? A tokamak is a reactor design concept for nuclear fusion, wherein plasma is confined in a donut shape - a torus - using magnetic fields. It is considered, by people who know about these things, to be the most realistic and achievable approach to nuclear fusion.


The long-and-short of it is that nuclear fusion (where two atomic nuclei combine to create a heavier nucleus, releasing energy) will in theory have huge advantages over nuclear fission (where a nucleus is split). However, despite being theorised about and researched since the 1940s, no fusion reactor has yet produced more energy than it consumes. If it can be done, this technology could change everything about global energy supplies and become a major tool in fighting climate change (fusion even produces less waste than fission).

So, perhaps in 100 years they'll be writing textbooks about how thirsty weebs inadvertently saved the planet by buying bunny costumes.

miHoYo's not just into nuclear fusion: last year it funded a lab studying brain-computer interface technologies, and how they could possibly be used to treat depression.

This is the first major investment in Energy Singularity, which was founded in 2021 by experts in various fields, and is dedicated to creating commercialised fusion technology. After this funding it will focus on developing what it calls an "experimental advanced superconducting Tokamak (EAST)."

Yes that name does sound a bit like a boss fight. Genshin Impact, meanwhile, continues to receive regular updates, with 2.5 arriving just under a month ago and driving fans bonkers with its new characters.

Rich Stanton
Rich is a games journalist with 15 years' experience, beginning his career on Edge magazine before working for a wide range of outlets, including Ars Technica, Eurogamer, GamesRadar+, Gamespot, the Guardian, IGN, the New Statesman, Polygon, and Vice. He was the editor of Kotaku UK, the UK arm of Kotaku, for three years before joining PC Gamer. He is the author of a Brief History of Video Games, a full history of the medium, which the Midwest Book Review described as "[a] must-read for serious minded game historians and curious video game connoisseurs alike."

Swiss Plasma Center and DeepMind Use AI To Control Plasmas for Nuclear Fusion

Plasma Inside TCV Tokamak

Plasma inside the TCV tokamak. Credit: Curdin Wüthrich/SPC/EPFL

Scientists at EPFL’s Swiss Plasma Center and DeepMind have jointly developed a new method for controlling plasma configurations for use in nuclear fusion research.

EPFL’s Swiss Plasma Center (SPC) has decades of experience in plasma physics and plasma control methods. DeepMind is a scientific discovery company, acquired by Google in 2014, that’s committed to ‘solving intelligence to advance science and humanity.’ Together, they have developed a new magnetic control method for plasmas based on deep reinforcement learning and applied it to a real-world plasma for the first time in the SPC’s tokamak research facility, TCV. Their study has just been published in Nature.

Tokamaks are donut-shaped devices for conducting research on nuclear fusion, and the SPC is one of the few research centers in the world that has one in operation. These devices use a powerful magnetic field to confine plasma at extremely high temperatures – hundreds of millions of degrees Celsius, even hotter than the sun’s core – so that nuclear fusion can occur between hydrogen atoms. The energy released from fusion is being studied for use in generating electricity. What makes the SPC’s tokamak unique is that it allows for a variety of plasma configurations, hence its name: variable-configuration tokamak (TCV). That means scientists can use it to investigate new approaches for confining and controlling plasmas. A plasma’s configuration relates to its shape and position in the device.

Deep Reinforcement Learning Steers Fusion Plasma

The controller trained with deep reinforcement learning steers the plasma through multiple phases of an experiment. On the left, there is an inside view in the tokamak during the experiment. On the right, you can see the reconstructed plasma shape and the target points we wanted to hit. Credit: DeepMind & SPC/EPFL

Controlling a substance as hot as the Sun

Tokamaks form and maintain plasmas through a series of magnetic coils whose settings, especially voltage, must be controlled carefully. Otherwise, the plasma could collide with the vessel walls and deteriorate. To prevent this from happening, researchers at the SPC first test their control-system configurations on a simulator before using them in the TCV tokamak. “Our simulator is based on more than 20 years of research and is updated continuously,” says Federico Felici, an SPC scientist and co-author of the study. “But even so, lengthy calculations are still needed to determine the right value for each variable in the control system. That’s where our joint research project with DeepMind comes in.”

TCV Vacuum Vessel 3D Model

3D model of the TCV vacuum vessel containing the plasma, surrounded by various magnetic coils to keep the plasma in place and to affect its shape. Credit: DeepMind & SPC/EPFL

DeepMind’s experts developed an AI algorithm that can create and maintain specific plasma configurations, and trained it on the SPC’s simulator. The algorithm first tried many different control strategies in simulation, gathering experience; based on that experience, it then worked the other way, generating the control settings needed to produce a requested plasma configuration. After being trained, the AI-based system was able to create and maintain a wide range of plasma shapes and advanced configurations, including one where two separate plasmas are maintained simultaneously in the vessel. Finally, the research team tested their new system directly on the tokamak to see how it would perform under real-world conditions.
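For readers curious what "trying control strategies in simulation and learning from the experience" looks like, here is a deliberately tiny sketch. It is not DeepMind's method - the study trained deep neural-network policies with an actor-critic algorithm on a full-physics simulator - but a toy random-search version of the same idea, applied to a made-up one-dimensional "plasma position" model:

```python
import random

# Toy illustration of learning a controller in simulation (an assumption-laden
# stand-in, NOT the paper's algorithm): a 1-D "plasma position" with an
# unstable drift is stabilized by a coil "voltage" chosen by a one-parameter
# linear policy, tuned by random search over simulated episodes.

A, B = 1.05, 0.5          # toy dynamics: position drifts away unless corrected

def episode_return(gain: float, steps: int = 50) -> float:
    z, total = 1.0, 0.0    # start off-target; reward penalizes distance from 0
    for _ in range(steps):
        u = -gain * z                  # policy: control action from observation
        z = A * z + B * u              # simulator step
        total -= z * z                 # reward: stay near the target position
    return total

best_gain, best_ret = 0.0, episode_return(0.0)
for _ in range(2000):                  # gather experience with varied strategies
    g = random.uniform(0.0, 2.0)
    r = episode_return(g)
    if r > best_ret:
        best_gain, best_ret = g, r

print(f"learned gain ~{best_gain:.2f}, return {best_ret:.2f}")
# The learned gain drives z toward 0, where the uncontrolled system diverges.
```

The real system replaces the one-number policy with a deep neural network, the single observation with dozens of magnetic measurements, and the single actuator with TCV's full set of control coils, but the learn-in-simulation loop is the same shape.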

Range of Different Plasma Shapes

Range of different plasma shapes generated with the reinforcement learning controller. Credit: DeepMind & SPC/EPFL

The SPC’s collaboration with DeepMind dates back to 2018 when Felici first met DeepMind scientists at a hackathon at the company’s London headquarters. There he explained his research group’s tokamak magnetic-control problem. “DeepMind was immediately interested in the prospect of testing their AI technology in a field such as nuclear fusion, and especially on a real-world system like a tokamak,” says Felici. Martin Riedmiller, control team lead at DeepMind and co-author of the study, adds that “our team’s mission is to research a new generation of AI systems – closed-loop controllers – that can learn in complex dynamic environments completely from scratch. Controlling a fusion plasma in the real world offers fantastic, albeit extremely challenging and complex, opportunities.”

A win-win collaboration

After speaking with Felici, DeepMind offered to work with the SPC to develop an AI-based control system for its tokamak. “We agreed to the idea right away, because we saw the huge potential for innovation,” says Ambrogio Fasoli, the director of the SPC and a co-author of the study. “All the DeepMind scientists we worked with were highly enthusiastic and knew a lot about implementing AI in control systems.” For his part, Felici was impressed with the amazing things DeepMind can do in a short time when it focuses its efforts on a given project.

The collaboration with the SPC pushes us to improve our reinforcement learning algorithms.
— Brendan Tracey, senior research engineer, DeepMind

DeepMind also got a lot out of the joint research project, illustrating the benefits to both parties of taking a multidisciplinary approach. Brendan Tracey, a senior research engineer at DeepMind and co-author of the study, says: “The collaboration with the SPC pushes us to improve our reinforcement learning algorithms, and as a result can accelerate research on fusing plasmas.”

This project should pave the way for EPFL to seek out other joint R&D opportunities with outside organizations. “We’re always open to innovative win-win collaborations where we can share ideas and explore new perspectives, thereby speeding the pace of technological development,” says Fasoli.

Reference: “Magnetic control of tokamak plasmas through deep reinforcement learning” by Jonas Degrave, Federico Felici, Jonas Buchli, Michael Neunert, Brendan Tracey, Francesco Carpanese, Timo Ewalds, Roland Hafner, Abbas Abdolmaleki, Diego de las Casas, Craig Donner, Leslie Fritz, Cristian Galperti, Andrea Huber, James Keeling, Maria Tsimpoukelli, Jackie Kay, Antoine Merle, Jean-Marc Moret, Seb Noury, Federico Pesamosca, David Pfau, Olivier Sauter, Cristian Sommariva, Stefano Coda, Basil Duval, Ambrogio Fasoli, Pushmeet Kohli, Koray Kavukcuoglu, Demis Hassabis and Martin Riedmiller, 16 February 2022, Nature.
DOI: 10.1038/s41586-021-04301-9

NASA’S NEW SHORTCUT TO FUSION POWER

Lattice confinement fusion eliminates massive magnets and powerful lasers


BAYARBADRAKH BARAMSAI, THERESA BENYO, LAWRENCE FORSLEY, AND BRUCE STEINETZ
27 FEB 2022

EDMON DE HARO

PHYSICISTS FIRST SUSPECTED more than a century ago that the fusing of hydrogen into helium powers the sun. It took researchers many years to unravel the secrets by which lighter elements are smashed together into heavier ones inside stars, releasing energy in the process. And scientists and engineers have continued to study the sun’s fusion process in hopes of one day using nuclear fusion to generate heat or electricity. But the prospect of meeting our energy needs this way remains elusive.

The extraction of energy from nuclear fission, by contrast, happened relatively quickly. Fission in uranium was discovered in 1938, in Germany, and it was only four years until the first nuclear “pile” was constructed in Chicago, in 1942.

There are currently about 440 fission reactors operating worldwide, which together can generate about 400 gigawatts of power with zero carbon emissions. Yet these fission plants, for all their value, have considerable downsides. The enriched uranium fuel they use must be kept secure. Devastating accidents, like the one at Fukushima in Japan, can leave areas uninhabitable. Fission waste by-products need to be disposed of safely, and they remain radioactive for thousands of years. Consequently, governments, universities, and companies have long looked to fusion to remedy these ills.

Among those interested parties is NASA. The space agency has significant energy needs for deep-space travel, including probes and crewed missions to the moon and Mars. For more than 60 years, photovoltaic cells, fuel cells, or radioisotope thermoelectric generators (RTGs) have provided power to spacecraft. RTGs, which rely on the heat produced when nonfissile plutonium-238 decays, have demonstrated excellent longevity—both Voyager probes use such generators and remain operational nearly 45 years after their launch, for example. But these generators convert heat to electricity at roughly 7.5 percent efficiency. And modern spacecraft need more power than an RTG of reasonable size can provide.
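To put the RTG numbers in perspective, here is a rough sizing sketch. It assumes plutonium-238's specific thermal power of roughly 0.57 watts per gram - a commonly quoted approximation - together with the ~7.5 percent conversion efficiency mentioned above:

```python
# Rough RTG sizing, assuming Pu-238's specific thermal power (~0.57 W/g,
# an approximation) and the ~7.5% heat-to-electricity efficiency cited above.
SPECIFIC_THERMAL_W_PER_G = 0.57
EFFICIENCY = 0.075

def pu238_grams_for(electric_watts: float) -> float:
    thermal_watts = electric_watts / EFFICIENCY
    return thermal_watts / SPECIFIC_THERMAL_W_PER_G

print(f"{pu238_grams_for(300)/1000:.1f} kg of Pu-238 for ~300 W electric")
# Roughly 7 kg -- in line with the multi-kilogram Pu-238 loads flown on
# Voyager-class generators, and a lot of scarce isotope for modest power.
```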

One promising alternative is lattice confinement fusion (LCF), a type of fusion in which the nuclear fuel is bound in a metal lattice. The confinement encourages positively charged nuclei to fuse because the high electron density of the conductive metal reduces the likelihood that two nuclei will repel each other as they get closer together.

 
The deuterated erbium (chemical symbol ErD3) is placed into thumb-size vials, as shown in this set of samples from a 20 June 2018 experiment. Here, the vials are arrayed pre-experiment, with wipes on top of the metal to keep the metal in position during the experiment. The metal has begun to crack and break apart, indicating it is fully saturated. 
NASA

The vials are placed upside down to align the metal with the gamma ray beam. Gamma rays have turned the clear glass amber.
NASA

We and other scientists and engineers at NASA Glenn Research Center, in Cleveland, are investigating whether this approach could one day provide enough power to operate small robotic probes on the surface of Mars, for example. LCF would eliminate the need for fissile materials such as enriched uranium, which can be costly to obtain and difficult to handle safely. LCF promises to be less expensive, smaller, and safer than other strategies for harnessing nuclear fusion. And as the technology matures, it could also find uses here on Earth, such as for small power plants for individual buildings, which would reduce fossil-fuel dependency and increase grid resiliency.

Physicists have long thought that fusion should be able to provide clean nuclear power. After all, the sun generates power this way. But the sun has a tremendous size advantage. At nearly 1.4 million kilometers in diameter, with a plasma core 150 times as dense as liquid water and heated to 15 million °C, the sun uses heat and gravity to force particles together and keep its fusion furnace stoked.

On Earth, we lack the ability to produce energy this way. A fusion reactor needs to reach a critical combination of fuel-particle density, confinement time, and plasma temperature (the Lawson criterion, named after its creator, John Lawson) to achieve a net-positive energy output. And so far, nobody has done that.
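In its often-quoted "triple product" form, the Lawson criterion requires the product of density, temperature, and energy confinement time to exceed roughly 3 x 10^21 keV·s/m^3 for deuterium-tritium ignition. A quick check with illustrative numbers (assumptions, not measurements from any particular machine):

```python
# Lawson criterion, triple-product form: n * T * tau_E must exceed roughly
# 3e21 keV*s/m^3 for D-T ignition (an often-quoted approximate threshold).
THRESHOLD = 3e21   # keV * s / m^3

def lawson_triple_product(n_per_m3: float, T_keV: float, tau_s: float) -> float:
    return n_per_m3 * T_keV * tau_s

# Plausible illustrative numbers for a large tokamak (assumptions, not data):
n, T, tau = 1e20, 15.0, 1.0     # density, temperature, energy confinement time
product = lawson_triple_product(n, T, tau)
print(f"triple product = {product:.1e}; ignition needs > {THRESHOLD:.0e}")
print("ignites" if product > THRESHOLD else "falls short")
```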

Lighting the Fusion Fire


In lattice confinement fusion (LCF), a beam of gamma rays is directed at a sample of erbium [shown here] or titanium saturated with deuterons. Occasionally, gamma rays of sufficient energy will break apart a deuteron in the metal lattice into its constituent proton and neutron.


The neutron collides with another deuteron in the lattice, imparting some of its own momentum to the deuteron. The electron-screened deuteron is now energetic enough to overcome the Coulomb barrier, which would typically repel it from another deuteron.


Deuteron-Deuteron Fusion


When the energetic deuteron fuses with another deuteron in the lattice, it can produce a helium-3 nucleus (helion) and give off useful energy. A leftover neutron could provide the push for another energetic deuteron elsewhere.

Alternatively, the fusing of the two deuterons could result in a hydrogen-3 nucleus (triton) and a leftover proton. This reaction also produces useful energy.

Stripping and OP Reaction


Another possible reaction in lattice confinement fusion would happen if an erbium atom instead rips apart the energetic deuteron and absorbs the proton. The extra proton changes the erbium atom to thulium and releases energy.

If the erbium atom absorbs the neutron, it becomes a new isotope of erbium. This is an Oppenheimer-Phillips (OP) stripping reaction. The proton from the broken-apart deuteron heats the lattice.

Fusion reactors commonly utilize two different hydrogen isotopes: deuterium (one proton and one neutron) and tritium (one proton and two neutrons). These are fused into helium nuclei (two protons and two neutrons)—also called alpha particles—with an unbound neutron left over.

Existing fusion reactors rely on the resulting alpha particles—and the energy released in the process of their creation—to further heat the plasma. The plasma will then drive more nuclear reactions with the end goal of providing a net power gain. But there are limits. Even in the hottest plasmas that reactors can create, alpha particles will mostly skip past additional deuterium nuclei without transferring much energy. For a fusion reactor to be successful, it needs to create as many direct hits between alpha particles and deuterium nuclei as possible.

In the 1950s, scientists created various magnetic-confinement fusion devices, the most well known of which were Andrei Sakharov’s tokamak and Lyman Spitzer’s stellarator. Setting aside differences in design particulars, each attempts the near-impossible: Heat a gas enough for it to become a plasma and magnetically squeeze it enough to ignite fusion—all without letting the plasma escape.

Inertial-confinement fusion devices followed in the 1970s. They used lasers and ion beams either to compress the surface of a target in a direct-drive implosion or to energize an interior target container in an indirect-drive implosion. Unlike magnetically confined reactions, which can last for seconds or even minutes (and perhaps one day, indefinitely), inertial-confinement fusion reactions last less than a microsecond before the target disassembles, thus ending the reaction.

Both types of devices can create fusion, but so far they are incapable of generating enough energy to offset what’s needed to initiate and maintain the nuclear reactions. In other words, more energy goes in than comes out. Hybrid approaches, collectively called magneto-inertial fusion, face the same issues.

Who’s Who in the Fusion Zoo

Proton: Positively charged protons (along with neutrons) make up atomic nuclei. One component of lattice confinement fusion (LCF) may occur when a proton is absorbed by an erbium atom in a deuteron stripping reaction.

Neutron: Neutrally charged neutrons (along with protons) make up atomic nuclei. In fusion reactions, they impart energy to other particles such as deuterons. They also can be absorbed in Oppenheimer-Phillips reactions.

Erbium & Titanium: Erbium and titanium are the metals of choice for LCF. Relatively colossal compared with the other particles involved, they hold the deuterons and screen them from one another.

Deuterium: Deuterium is hydrogen with one proton and one neutron in its nucleus (hydrogen with just the proton is protium). Deuterium’s nucleus, called a deuteron, is crucial to LCF.

Deuteron: The nucleus of a deuterium atom. Deuterons are vital to LCF—the actual fusion instances occur when an energetic deuteron smashes into another in the lattice. They can also be broken apart in stripping reactions.

Hydrogen-3 (Tritium): One possible resulting particle from deuteron-deuteron fusion, alongside a leftover proton. Tritium has one proton and two neutrons in its nucleus, which is also called a triton.

Helium-3: One possible resulting particle from deuteron-deuteron fusion, alongside a leftover neutron. Helium-3 has two protons and one neutron in its nucleus, which is also called a helion.

Alpha particle: The core of a normal helium atom (two protons and two neutrons). Alpha particles are a commonplace result of typical fusion reactors, which often smash deuterium and tritium particles together. They can also emerge from LCF reactions.

Gamma ray: Extremely energetic photons that are used to kick off the fusion reactions in a metal lattice by breaking apart deuterons.

Current fusion reactors also require copious amounts of tritium as one part of their fuel mixture. The most reliable source of tritium is a fission reactor, which somewhat defeats the purpose of using fusion.

The fundamental problem of these techniques is that the atomic nuclei in the reactor need to be energetic enough—meaning hot enough—to overcome the Coulomb barrier, the natural tendency for the positively charged nuclei to repel one another. Because of the Coulomb barrier, fusing atomic nuclei have a very small fusion cross section, meaning the probability that two particles will fuse is low. You can increase the cross section by raising the plasma temperature to 100 million °C, but that requires increasingly heroic efforts to confine the plasma. As it stands, after billions of dollars of investment and decades of research, these approaches, which we’ll call “hot fusion,” still have a long way to go.

The barriers to hot fusion here on Earth are indeed tremendous. As you can imagine, they’d be even more overwhelming on a spacecraft, which can’t carry a tokamak or stellarator onboard. Fission reactors are being considered as an alternative—NASA successfully tested the Kilopower fission reactor at the Nevada National Security Site in 2018 using a uranium-235 core about the size of a paper towel roll. The Kilopower reactor could produce up to 10 kilowatts of electric power. The downside is that it required highly enriched uranium, which would have brought additional launch safety and security concerns. This fuel also costs a lot.

But fusion could still work, even if the conventional hot-fusion approaches are nonstarters. LCF technology could be compact enough, light enough, and simple enough to serve for spacecraft.

How does LCF work? Remember that we earlier mentioned deuterium, the isotope of hydrogen with one proton and one neutron in its nucleus. Deuterated metals - erbium and titanium, in our experiments - have been "saturated" with either deuterium or deuterium atoms stripped of their electrons (deuterons). This is possible because the metal naturally exists in a regularly spaced lattice structure, which creates equally regular slots between the metal atoms where deuterons can nest.

In a tokamak or a stellarator, the hot plasma is limited to a density of 10^14 deuterons per cubic centimeter. Inertial-confinement fusion devices can momentarily reach densities of 10^26 deuterons per cubic centimeter. It turns out that metals like erbium can indefinitely hold deuterons at a density of nearly 10^23 per cubic centimeter—far higher than the density that can be attained in a magnetic-confinement device, and only three orders of magnitude below that attained in an inertial-confinement device. Crucially, these metals can hold that many ions at room temperature.

The deuteron-saturated metal forms a plasma with neutral charge. The metal lattice confines and electron-screens the deuterons, keeping each of them from “seeing” adjacent deuterons (which are all positively charged). This screening increases the chances of more direct hits, which further promotes the fusion reaction. Without the electron screening, two deuterons would be much more likely to repel each other.

Using a metal lattice that has screened a dense, cold plasma of deuterons, we can jump-start the fusion process using what is called a Dynamitron electron-beam accelerator. The electron beam hits a tantalum target and produces gamma rays, which then irradiate thumb-size vials containing titanium deuteride or erbium deuteride.

When a gamma ray of sufficient energy—about 2.2 megaelectron volts (MeV)—strikes one of the deuterons in the metal lattice, the deuteron breaks apart into its constituent proton and neutron. The released neutron may collide with another deuteron, accelerating it much as a pool cue accelerates a ball when striking it. This second, energetic deuteron then goes through one of two processes: screened fusion or a stripping reaction.
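The pool-cue analogy can be made quantitative with elementary collision kinematics: a head-on elastic collision transfers at most a fraction 4mM/(m+M)^2 of the neutron's kinetic energy to the deuteron. A short check (the 1 MeV input is a hypothetical figure for illustration; the actual neutron energy depends on how far the gamma exceeds the 2.2 MeV photodissociation threshold):

```python
# Maximum fraction of a neutron's kinetic energy handed to a deuteron
# in a single head-on elastic collision (classical two-body kinematics).
M_NEUTRON  = 1.0087   # atomic mass units
M_DEUTERON = 2.0136

frac = (4 * M_NEUTRON * M_DEUTERON) / (M_NEUTRON + M_DEUTERON) ** 2
print(f"max energy transfer per collision: {frac:.1%}")   # ~88.9%

# A hypothetical 1 MeV neutron can thus leave a deuteron with up to ~0.89 MeV,
# ample energy for the deuteron to attempt screened fusion in the lattice.
print(f"1 MeV neutron -> up to {frac * 1.0:.2f} MeV deuteron")
```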

In screened fusion, which we have observed in our experiments, the energetic deuteron fuses with another deuteron in the lattice. The fusion reaction will result in either a helium-3 nucleus and a leftover neutron or a hydrogen-3 nucleus and a leftover proton. These fusion products may fuse with other deuterons, creating an alpha particle, or with another helium-3 or hydrogen-3 nucleus. Each of these nuclear reactions releases energy, helping to drive more instances of fusion.

In a stripping reaction, an atom like the titanium or erbium in our experiments strips the proton or neutron from the deuteron and captures that proton or neutron. Erbium, titanium, and other heavier atoms preferentially absorb the neutron because the proton is repulsed by the positively charged nucleus (called an Oppenheimer-Phillips reaction). It is theoretically possible, although we haven’t observed it, that the electron screening might allow the proton to be captured, transforming erbium into thulium or titanium into vanadium. Both kinds of stripping reactions would produce useful energy.

As it stands, after billions of dollars of investment and decades of research, these approaches, which we’ll call “hot fusion,” still have a long way to go.

To be sure that we were actually producing fusion in our vials of erbium deuteride and titanium deuteride, we used neutron spectroscopy. This technique detects the neutrons that result from fusion reactions. When deuteron-deuteron fusion produces a helium-3 nucleus and a neutron, that neutron has an energy of 2.45 MeV. So when we detected 2.45 MeV neutrons, we knew fusion had occurred. We published those initial results in Physical Review C.
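The 2.45 MeV signature follows from momentum conservation: with the reacting deuterons nearly at rest, the reaction energy is shared between the neutron and the helium-3 nucleus in inverse proportion to their masses. A one-line check, using the approximate Q-value of this branch:

```python
# Why 2.45 MeV: in D + D -> He-3 + n, the reaction energy Q is split in
# inverse proportion to mass (momentum conservation, reactants near rest).
Q_DD_MEV  = 3.27      # approximate Q-value of the D(d,n)He-3 branch
M_NEUTRON = 1.0087    # atomic mass units
M_HELION  = 3.0149

E_neutron = Q_DD_MEV * M_HELION / (M_NEUTRON + M_HELION)
print(f"neutron energy ~ {E_neutron:.2f} MeV")   # ~2.45 MeV, the fusion signature
```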

Electron screening makes it seem as though the deuterons are fusing at a temperature of 11 million °C. In reality, the metal lattice remains much cooler than that, although it heats up somewhat from room temperature as the deuterons fuse.


 
Rich Martin [left], a research engineer, and coauthor Bruce Steinetz, principal investigator for the LCF project’s precursor experiment, examine samples after a run. NASA

Overall, in LCF, most of the heating occurs in regions just tens of micrometers across. This is far more efficient than in magnetic- or inertial-confinement fusion reactors, which heat the entire fuel mass to very high temperatures. LCF isn't cold fusion - it still requires energetic deuterons and can use neutrons to heat them. However, LCF also removes many of the technological and engineering barriers that have prevented other fusion schemes from being successful.

Although the neutron recoil technique we’ve been using is the most efficient means to transfer energy to cold deuterons, producing neutrons from a Dynamitron is energy intensive. There are other, lower-energy methods of producing neutrons, including using an isotopic neutron source, like americium-beryllium or californium-252, to initiate the reactions. We also need to make the reaction self-sustaining, which may be possible using neutron reflectors to bounce neutrons back into the lattice—carbon and beryllium are examples of common neutron reflectors. Another option is to couple a fusion neutron source with fission fuel to take advantage of the best of both worlds. Regardless, more development of the process is required to increase the efficiency of these lattice-confined nuclear reactions.

We’ve also triggered nuclear reactions by pumping deuterium gas through a thin wall of a palladium-silver alloy tubing, and by electrolytically loading palladium with deuterium. In the latter experiment, we’ve detected fast neutrons. The electrolytic setup is now using the same neutron-spectroscopy detection method we mentioned above to measure the energy of those neutrons. The energy measurements we get will inform us about the kinds of nuclear reaction that produce them.

We’re not alone in these endeavors. Researchers at Lawrence Berkeley National Laboratory, in California, with funding from Google Research, achieved favorable results with a similar electron-screened fusion setup. Researchers at the U.S. Naval Surface Warfare Center, Indian Head Division, in Maryland have likewise gotten promising initial results using an electrochemical approach to LCF. There are also upcoming conferences: the American Nuclear Society’s Nuclear and Emerging Technologies for Space conference in Cleveland in May and the International Conference on Cold Fusion 24, focused on solid-state energy, in Mountain View, Calif., in July.

Any practical application of LCF will require efficient, self-sustaining reactions. Our work represents just the first step toward realizing that goal. If the reaction rates can be significantly boosted, LCF may open an entirely new door for generating clean nuclear energy, both for space missions and for the many people who could use it here on Earth.



Bayarbadrakh Baramsai
is a systems engineer at NASA Glenn Research Center contributing to the lattice confinement fusion project.

Theresa Benyo
is a physicist and the principal investigator for the lattice confinement fusion project at NASA Glenn Research Center.

Lawrence Forsley
is the deputy principal investigator for NASA’s lattice confinement fusion project, based at NASA Glenn Research Center.

Bruce Steinetz
is a senior technologist at NASA Glenn Research Center involved in the lattice confinement fusion project.