Thursday, September 08, 2022

Calgary

Government emissions targets for fertilizer use unrealistic, industry report argues

Farmers likely to achieve half of federal government's targeted 30 per cent reduction, it says

Harvested wheat fields near Cremona, Alta. (Jeff McIntosh/The Canadian Press)

A new industry-led report suggests Canada's farmers can likely only achieve half of the federal government's targeted 30 per cent reduction in fertilizer emissions by 2030.

The report, commissioned by Fertilizer Canada and the Canola Council of Canada, examines what effect a 30 per cent reduction in greenhouse gas emissions from the use of nitrogen-based fertilizers on Canadian farms would have on crop yields and farm financial viability.

The report concludes that it may be possible to achieve a 14 per cent reduction in emissions from fertilizer by 2030, but that reaching 30 per cent is not "realistically achievable without imposing significant costs on Canada's crop producers and potentially damaging the financial health of Canada's crop production sector."

"I believe what (this report) is saying is the 30 per cent reduction target is not achievable without putting production and exports in jeopardy, and we've been saying that all along," said Tom Steve, general manager of the Alberta Wheat and Barley Commissions.

"It was an arbitrary target that was set somewhere in the government, with no path as to how it was going to be achieved."

Emissions goal too ambitious: farmers

Ottawa first established its 30 per cent target for fertilizer emissions reduction in late 2020, as part of the federal government's overall climate change plan, and recently wrapped up a months-long consultation process on it.

According to the government, between 2005 and 2019, fertilizer use on Canadian farms increased by 71 per cent. Over the same period, fertilizer-related emissions of nitrous oxide (a greenhouse gas roughly 300 times more potent, from a global warming perspective, than carbon dioxide) in Canada increased by 54 per cent. In 2019 alone, according to the government, the application of nitrogen-based fertilizer resulted in 12.75 million tonnes of greenhouse gas emissions, equivalent to the emissions produced by 3.9 million passenger vehicles.
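The government's figures can be sanity-checked with back-of-envelope arithmetic; the sketch below uses only the totals quoted above and is illustrative, not an official calculation.

```python
# Back-of-envelope check of the article's figures. Totals are from the
# article; the arithmetic below just makes the comparisons explicit.
total_co2e_mt = 12.75   # Mt CO2-equivalent from fertilizer application, 2019
vehicles = 3.9e6        # passenger-vehicle equivalent cited by the government

# Implied annual emissions per passenger vehicle
per_vehicle_t = total_co2e_mt * 1e6 / vehicles
print(f"about {per_vehicle_t:.1f} t CO2e per vehicle per year")

# Absolute size of the 30 per cent target vs. the report's 14 per cent estimate
for cut in (0.30, 0.14):
    print(f"{cut:.0%} cut avoids about {cut * total_co2e_mt:.1f} Mt CO2e")
```

The roughly 3.3 tonnes per vehicle implied by the division is in line with typical annual passenger-vehicle emissions, which is why the comparison is commonly used.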

A farmer works a potato field in North Tryon, Prince Edward Island. (Andrew Vaughan/The Canadian Press)

The government has said its 30 per cent target is a goal, not a mandatory enforceable target. It has also said it believes the target is achievable, since many of the required technologies and practices to reduce emissions from fertilizer use already exist.

Still, farmers have warned the goal is too ambitious, especially at a time when Canada's agriculture industry is being asked to produce more to help address global food security fears.

"It's really taken our eye off the ball of what is needed in our industry, which is to become more efficient and productive and competitive," Steve said.

"Most farmers already do whatever they can to reduce their use of fertilizer — it's their most expensive input."

Best practices

Karen Proud, president and chief executive of Fertilizer Canada, said there are already a number of industry-accepted best practices in place when it comes to fertilizer management.

These include using the right fertilizer for the soil, as well as applying it at the right time of year and in the right amounts.

By helping more farmers become aware of these practices and encouraging them to adopt them, Proud said, the industry could potentially achieve a 14 per cent reduction in emissions by 2030.

While that's an aggressive target, she said, it would strike a balance between the needs of the environment and the need for continued growth in food production.

Proud said that going beyond a 14 per cent reduction by 2030 would be economically unviable, since many of the changes required — such as working with a certified crop advisor, or doing soil testing — are costly for the farmer.

"We need to be able to allow farmers to increase their productivity to offset the costs of implementing these best practices," she said.

"The only way to do that is you allow them to increase yields, or the math doesn't work. You can't ask farmers to invest in practices at a loss."
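Proud's break-even argument can be sketched numerically. Every figure below is invented for illustration only; the point is the shape of the calculation, not the specific numbers.

```python
# Hypothetical break-even sketch of Proud's point: best practices pay off
# only if the extra yield they enable covers their cost. All numbers below
# are invented for illustration.
practice_cost_per_acre = 18.0   # $/acre for soil testing, advisory services, etc.
crop_price_per_bushel = 9.0     # $/bushel received by the farmer
baseline_yield = 50.0           # bushels/acre before any change

# Minimum extra yield needed for the practice not to run at a loss
breakeven_gain = practice_cost_per_acre / crop_price_per_bushel
print(f"break-even yield gain: {breakeven_gain:.1f} bu/acre "
      f"({breakeven_gain / baseline_yield:.0%} increase)")
```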

Canada announces funding

In February of this year, the federal government announced funding of up to $182.7 million for 12 recipient organizations to deliver the On-Farm Climate Action Fund across Canada.

Through the fund, Canadian farmers will be eligible to receive direct support for environmental best practices, including nitrogen fertilizer management, soil sampling and analysis, and equipment modifications for fertilizer application in fields.

Canada has set the goal of achieving net-zero greenhouse gas emissions by 2050.

According to the federal government, the agriculture sector has generated approximately 10 per cent of Canada's total greenhouse gas emissions annually since 1990.

Turning carbon dioxide into valuable products

Professor Ariel Furst (center), undergraduate Rachel Ahlmark (left), postdoc Gang Fan (right), and their colleagues are employing biological materials, including DNA, to achieve the conversion of carbon dioxide to valuable products. Credits: Gretchen Ertl

Carbon dioxide (CO2) is a major contributor to climate change and a significant product of many human activities, notably industrial manufacturing. A major goal in the energy field has been to chemically convert emitted CO2 into valuable chemicals or fuels. But while CO2 is available in abundance, it has not yet been widely used to generate value-added products. Why not?

The reason is that CO2 molecules are highly stable and therefore not prone to being chemically converted to a different form. Researchers have sought materials and device designs that could help spur that conversion, but nothing has worked well enough to yield an efficient, cost-effective system.

Two years ago, Ariel Furst, the Raymond (1921) and Helen St. Laurent Career Development Professor of Chemical Engineering at MIT, decided to try using something different—a material that gets more attention in discussions of biology than of chemical engineering. Already, results from work in her lab suggest that her unusual approach is paying off.

The stumbling block

The challenge begins with the first step in the CO2 conversion process. Before being transformed into a useful product, CO2 must be chemically converted into carbon monoxide (CO). That conversion can be encouraged using electrochemistry, a process in which an applied voltage provides the extra energy needed to make the stable CO2 molecules react. The problem is that achieving the CO2-to-CO conversion requires large energy inputs—and even then, CO makes up only a small fraction of the products that are formed.

To explore opportunities for improving this process, Furst and her research group focused on the electrocatalyst, a material that enhances the rate of a chemical reaction without being consumed in the process. The catalyst is key to successful operation. Inside an electrochemical device, the catalyst is often suspended in an aqueous (water-based) solution. When an electric potential (essentially a voltage) is applied to a submerged electrode, dissolved CO2 will—helped by the catalyst—be converted to CO.

But there's one stumbling block: The catalyst and the CO2 must meet on the surface of the electrode for the reaction to occur. In some studies, the catalyst is dispersed in the solution, but that approach requires more catalyst and isn't very efficient, according to Furst. "You have to both wait for the diffusion of CO2 to the catalyst and for the catalyst to reach the electrode before the reaction can occur," she explains. As a result, researchers worldwide have been exploring different methods of "immobilizing" the catalyst on the electrode.

Connecting the catalyst and the electrode

Before Furst could delve into that challenge, she needed to decide which of the two types of CO2 conversion catalysts to work with: the traditional solid-state catalyst or a catalyst made up of small molecules. In examining the literature, she concluded that small-molecule catalysts held the most promise. While their conversion efficiency tends to be lower than that of solid-state versions, molecular catalysts offer one important advantage: They can be tuned to emphasize reactions and products of interest.

Two approaches are commonly used to immobilize small-molecule catalysts on an electrode. One involves linking the catalyst to the electrode by strong covalent bonds—a type of bond in which atoms share electrons; the result is a strong, essentially permanent connection. The other sets up a non-covalent attachment between the catalyst and the electrode; unlike a covalent bond, this connection can easily be broken.

Neither approach is ideal. In the former case, the catalyst and electrode are firmly attached, ensuring efficient reactions; but when the activity of the catalyst degrades over time (which it will), the electrode can no longer be accessed. In the latter case, a degraded catalyst can be removed; but the exact placement of the small molecules of the catalyst on the electrode can't be controlled, leading to an inconsistent, often decreasing, catalytic efficiency—and simply increasing the amount of catalyst on the electrode surface without concern for where the molecules are placed doesn't solve the problem.

What was needed was a way to position the small-molecule catalyst firmly and accurately on the electrode and then release it when it degrades. For that task, Furst turned to what she and her team regard as a kind of "programmable molecular Velcro": deoxyribonucleic acid, or DNA.

Adding DNA to the mix

Mention DNA to most people, and they think of biological functions in living things. But the members of Furst's lab view DNA as more than just genetic code. "DNA has these really cool physical properties as a biomaterial that people don't often think about," she says. "DNA can be used as a molecular Velcro that can stick things together with very high precision."

Furst knew that DNA sequences had previously been used to immobilize molecules on surfaces for other purposes. So she devised a plan to use DNA to direct the immobilization of catalysts for CO2 conversion.

Her approach depends on a well-understood behavior of DNA called hybridization. The familiar DNA structure is a double helix that forms when two complementary strands connect. When the sequence of bases (the four building blocks of DNA) in the individual strands match up, hydrogen bonds form between complementary bases, firmly linking the strands together.

Using that behavior for catalyst immobilization involves two steps. First, the researchers attach a single strand of DNA to the electrode. Then they attach a complementary strand to the catalyst that is floating in the solution. When the latter strand gets near the former, the two strands hybridize; they become linked by multiple hydrogen bonds between properly paired bases. As a result, the catalyst is firmly affixed to the electrode by means of two interlocked, self-assembled DNA strands, one connected to the electrode and the other to the catalyst.
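The hybridization step can be illustrated with a toy base-pairing check; the sequences below are invented for illustration and are not the strands used in the lab.

```python
# Toy illustration of hybridization: two strands link only when their bases
# are complementary (A pairs with T, G pairs with C). Sequences are made up.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def is_complementary(strand_a: str, strand_b: str) -> bool:
    """True if strand_b pairs base-for-base with strand_a."""
    if len(strand_a) != len(strand_b):
        return False
    return all(COMPLEMENT[a] == b for a, b in zip(strand_a, strand_b))

electrode_strand = "ATGCGT"   # hypothetical strand anchored to the electrode
catalyst_strand  = "TACGCA"   # hypothetical strand attached to the catalyst

print(is_complementary(electrode_strand, catalyst_strand))  # True -> strands hybridize
```

In the real system each matched base pair contributes hydrogen bonds, which is what makes the assembled connection stable yet reversible by heating.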

Better still, the two strands can be detached from one another. "The connection is stable, but if we heat it up, we can remove the secondary strand that has the catalyst on it," says Furst. "So we can de-hybridize it. That allows us to recycle our electrode surfaces—without having to disassemble the device or do any harsh chemical steps."

Experimental investigation

To explore that idea, Furst and her team—postdocs Gang Fan and Thomas Gill, former graduate student Nathan Corbin Ph.D. '21, and former postdoc Amruta Karbelkar—performed a series of experiments using three small-molecule catalysts based on porphyrins, a group of compounds that are biologically important for processes ranging from enzyme activity to oxygen transport. Two of the catalysts involve a synthetic porphyrin plus a metal center of either cobalt or iron. The third catalyst is hemin, a natural porphyrin compound used to treat porphyria, a set of disorders that can affect the nervous system. "So even the small-molecule catalysts we chose are kind of inspired by nature," comments Furst.

In their experiments, the researchers first needed to modify single strands of DNA and deposit them on one of the electrodes submerged in the solution inside their electrochemical cell. Though this sounds straightforward, it did require some new chemistry. Led by Karbelkar and third-year undergraduate researcher Rachel Ahlmark, the team developed a fast, easy way to attach DNA to electrodes. For this work, the researchers' focus was on attaching DNA, but the "tethering" chemistry they developed can also be used to attach enzymes (protein catalysts), and Furst believes it will be highly useful as a general strategy for modifying carbon electrodes.

Once the single strands of DNA were deposited on the electrode, the researchers synthesized complementary strands and attached to them one of the three catalysts. When the DNA strands with the catalyst were added to the solution in the electrochemical cell, they readily hybridized with the DNA strands on the electrode. After half-an-hour, the researchers applied a voltage to the electrode to chemically convert CO2 dissolved in the solution and used a gas chromatograph to analyze the makeup of the gases produced by the conversion.

The team found that when the DNA-linked catalysts were freely dispersed in the solution, they were highly soluble—even when they included small-molecule catalysts that don't dissolve in water on their own. Indeed, while porphyrin-based catalysts in solution often stick together, once the DNA strands were attached, that counterproductive behavior was no longer evident.

The DNA-linked catalysts in solution were also more stable than their unmodified counterparts. They didn't degrade at voltages that caused the unmodified catalysts to degrade. "So just attaching that single strand of DNA to the catalyst in solution makes those catalysts more stable," says Furst. "We don't even have to put them on the electrode surface to see improved stability." When converting CO2 in this way, a stable catalyst will give a steady current over time. Experimental results showed that adding the DNA prevented the catalyst from degrading at voltages of interest for practical devices. Moreover, with all three catalysts in solution, the DNA modification significantly increased the production of CO per minute.

Allowing the DNA-linked catalyst to hybridize with the DNA connected to the electrode brought further improvements, even compared to the same DNA-linked catalyst in solution. For example, as a result of the DNA-directed assembly, the catalyst ended up firmly attached to the electrode, and the catalyst stability was further enhanced. Despite being highly soluble in aqueous solutions, the DNA-linked catalyst molecules remained hybridized at the surface of the electrode, even under harsh experimental conditions.

Immobilizing the DNA-linked catalyst on the electrode also significantly increased the rate of CO production. In a series of experiments, the researchers monitored the CO production rate with each of their catalysts in solution without attached DNA strands—the conventional setup—and then with them immobilized by DNA on the electrode. With all three catalysts, the amount of CO generated per minute was far higher when the DNA-linked catalyst was immobilized on the electrode.

In addition, immobilizing the DNA-linked catalyst on the electrode greatly increased the "selectivity" in terms of the products. One persistent challenge in using CO2 to generate CO in aqueous solutions is that there is an inevitable competition between the formation of CO and the formation of hydrogen. That tendency was eased by adding DNA to the catalyst in solution—and even more so when the catalyst was immobilized on the electrode using DNA. For both the cobalt-porphyrin catalyst and the hemin-based catalyst, the formation of CO relative to hydrogen was significantly higher with the DNA-linked catalyst on the electrode than in solution. With the iron-porphyrin catalyst they were about the same. "With the iron, it doesn't matter whether it's in solution or on the electrode," Furst explains. "Both of them have selectivity for CO, so that's good, too."
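Selectivity in CO2-reduction experiments is usually quantified as Faradaic efficiency, the fraction of charge passed that ends up in each product. A minimal sketch of that calculation follows; the electron counts come from the standard half-reactions, but the sample quantities are made up and are not values from the paper.

```python
# Sketch: quantifying CO-vs-H2 selectivity as Faradaic efficiency
# (fraction of the total charge that went into making each product).
# The sample numbers below are invented for illustration.
F = 96485.0   # Faraday constant, C/mol

def faradaic_efficiency(mol_product: float, n_electrons: int, charge_c: float) -> float:
    return n_electrons * F * mol_product / charge_c

charge_passed = 10.0   # C, total charge through the cell (made up)
mol_co = 4.0e-5        # mol CO detected by gas chromatograph (made up)
mol_h2 = 0.8e-5        # mol H2 detected (made up)

fe_co = faradaic_efficiency(mol_co, 2, charge_passed)  # CO2 + 2e- + 2H+ -> CO + H2O
fe_h2 = faradaic_efficiency(mol_h2, 2, charge_passed)  # 2H+ + 2e- -> H2
print(f"FE(CO) = {fe_co:.0%}, FE(H2) = {fe_h2:.0%}")
```

A higher FE(CO) relative to FE(H2) is exactly the "selectivity" improvement described above.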

Progress and plans

Furst and her team have now demonstrated that their DNA-based approach combines the advantages of the traditional solid-state catalysts and the newer small-molecule ones. In their experiments, they achieved the highly efficient chemical conversion of CO2 to CO and also were able to control the mix of products formed. And they believe that their technique should prove scalable: DNA is inexpensive and widely available, and the amount of catalyst required is several orders of magnitude lower when it's immobilized using DNA.

Based on her work thus far, Furst hypothesizes that the structure and spacing of the small molecules on the electrode may directly impact both catalytic efficiency and product selectivity. Using DNA to control the precise positioning of her small-molecule catalysts, she plans to evaluate those impacts and then extrapolate design parameters that can be applied to other classes of energy-conversion catalysts. Ultimately, she hopes to develop a predictive algorithm that researchers can use as they design electrocatalytic systems for a wide variety of applications.

More information: Gang Fan et al, DNA-based immobilization for improved electrochemical carbon dioxide reduction (2022). DOI: 10.26434/chemrxiv-2022-qll2k

Provided by Massachusetts Institute of Technology 

This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.



Space junk threat: Researchers working to reduce impact of falling debris

Building smarter could help a lot.


This NASA graphic depicts the amount of space junk orbiting Earth. The debris field is based on data from NASA's Orbital Debris Program Office. Image released on May 1, 2013. (Image credit: NASA's Goddard Space Flight Center/JSC)

New research points to increasing pollution in the upper atmosphere due to reentering satellites, rocket bodies and other incoming spaceflight flotsam.

That troublesome trend is leading researchers to come up with mitigation proposals, one of which would use fabrication concepts to reduce the size of space debris pieces that make it to Earth.

The Aerospace Corporation, a California-based nonprofit company, has established a Space Safety Institute (SSI). One of SSI's five focus areas is launch and reentry safety, and "Design for Demise" is an item the institute is paying attention to.

Related: Kessler Syndrome and the space debris problem

"The goal for Design-for-Demise concepts is to use materials and design concepts in the construction of space hardware so that the hardware will demise in a controllable fashion during reentry," said William Ailor of The Aerospace Corporation. "Ideally, the hardware would be reduced to a cloud of very small particles that would not be harmful to people on the ground or to an aircraft."


Ailor said that analysis of debris recovered after reentries shows that certain low-melting-point materials such as aluminum will, unless shielded by other materials, totally demise if exposed to reentry heating. So, use of these materials for external and core structures will help assure that interior components will be released quickly and exposed to a high heating environment.

"Components made of higher melting point materials such as stainless steel and titanium will survive intense heating and have been recovered on the ground," Ailor added. "Designs that expose high-melting-point materials to high temperatures directly for long periods will help."
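Ailor's materials argument comes down to comparing melting points against reentry heating. The sketch below uses approximate handbook melting points and an assumed heating threshold chosen only for illustration.

```python
# Approximate melting points behind the Design-for-Demise argument
# (rounded handbook values, degrees Celsius).
melting_c = {"aluminum": 660, "stainless steel": 1400, "titanium": 1668}

# Illustrative peak heating level during reentry; an assumed threshold,
# not a figure from the article.
reentry_c = 1500

for material, mp in melting_c.items():
    fate = "likely demises" if mp < reentry_c else "may survive to the ground"
    print(f"{material}: melts at ~{mp} C -> {fate}")
```

The large gap between aluminum and the refractory metals is why external aluminum structures tend to demise while steel and titanium components are recovered intact.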

That said, Ailor explained that the best solution to minimize hazards is to purposefully de-orbit space hardware into "safe" regions where there are no people and aircraft at the time of reentry.

Graphic showing launch and reentry particle emissions in Earth's stratosphere. (Image credit: The Aerospace Corporation)


Space hardware fallout


On the other hand, there's the feel-good observation that 75% of the Earth is covered by ocean waters. Consequently, most surviving space clutter merely tumbles harmlessly into watery graves. But there are incidents of space hardware fallout making it to terra firma.

A recent report in Nature Astronomy called "Unnecessary risks created by uncontrolled rocket reentries" analyzed three decades of data to produce a "casualty expectation" — the risk to human life from the uncontrolled reentry of rocket bodies into Earth's atmosphere.

Michael Byers of the University of British Columbia and his colleagues explain that, under current practices, there is a roughly 10% chance that one or more casualties will occur over the next decade from falling rocket debris. Furthermore, without multilateral agreements for mandated controlled rocket body reentries, launcher nations will export these risks pointlessly, creating danger for people on the surface.
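The roughly 10 per cent decade-level figure reflects how many small per-reentry risks compound. The sketch below shows the standard compounding calculation; the per-reentry risk and reentry count are assumptions chosen for illustration, not inputs from the Nature Astronomy paper.

```python
# How small per-reentry casualty risks compound over many reentries.
# Both numbers below are assumed for illustration.
per_reentry_risk = 1.7e-4     # assumed chance of >=1 casualty per uncontrolled reentry
reentries_per_decade = 600    # assumed number of uncontrolled reentries in 10 years

# Probability that at least one reentry causes a casualty in the decade
p_any = 1 - (1 - per_reentry_risk) ** reentries_per_decade
print(f"P(at least one casualty in a decade) = {p_any:.1%}")
```

With these assumed inputs the compound probability lands near the paper's headline figure of about 10 per cent.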

In another recent study, published in the journal Earth's Future, researchers from University College London (UCL), the University of Cambridge and the Massachusetts Institute of Technology assessed the impact of rocket launches and space debris on stratospheric ozone and global climate.

Eloise Marais, a study co-author, is an associate professor of physical geography at UCL. She told Space.com that, even if a reentering rocket were to completely burn up, it would produce air pollutants known as nitrogen oxides (NOx) from the high temperatures.

"NOx contributes to depletion of ozone in the stratosphere, where ozone protects us from harmful ultraviolet radiation from the sun. There are also likely other pollutants formed from combustion of the rocket material, such as alumina (Al2O3) that also depletes ozone and at a faster rate than NOx," Marais said. "In essence, we need more sustainable solutions for dealing with space debris."


Chinese version of Russian roulette


Several wearisome cases in point involve China's Long March 5B rocket, which most recently launched the Wentian lab module to the Tiangong space station, a facility that is still under construction.

After Long March 5B launches, the rocket's core stage reaches orbit and then plays a Chinese version of Russian roulette, zooming around Earth for a week or so before finally falling uncontrolled to Earth.

That's a departure from pretty much every other orbital rocket, whose first stages ditch into the ocean or over unpopulated land before reaching orbit — or, in the case of SpaceX's Falcon 9 and Falcon Heavy rockets, land vertically for future reuse.

"It seems that the Chinese have entered 'defense mode,' basically repeatedly saying that the reentry performed by the Long March 5B core stage 'was common practice in the space industry' with little casualty risks," said aerospace engineer Jean Deville, a close observer of China's space program who co-produces the website Dongfang Hour.

"We don't know why a safer de-orbiting method wasn't considered during the design phase," Deville told Space.com. "Was the risk considered truly acceptable? Or was there a technical or a development timeline limitation at the time?"

Deville notes that Zhao Lijian, China's foreign ministry spokesperson, has stated that the Long March 5B "adopted a specific technical design, making most components burn up during reentry."

"While we're lacking specifics," Deville said, "could this be hinting at some effort during the design process to make sure that most of the materials used would burn up during reentry?"

Whatever the case, Deville said there are several more Long March 5B launches coming up. In October, for example, China plans to launch the Mengtian lab module to meet up with Tiangong, wrapping up the station's assembly phase. Also, in early 2024, another Long March 5B will hurl into space the large Xuntian space telescope, Deville said.

That booster class could possibly launch a couple more times at the end of the decade, Deville said, as several high-ranking space program leaders have suggested a possible expansion of Tiangong.

"Lots of open questions, for which I don't have answers," Deville concluded.

Leonard David is author of the book "Moon Rush: The New Space Race," published by National Geographic in May 2019. A longtime writer for Space.com, David has been reporting on the space industry for more than five decades.
Fact Check: Does Viral Post Really Show 'Clearest Image of Venus'?

BY ED BROWNE ON 9/5/22
NEWSWEEK


An old viral image of the surface of the planet Venus was making the rounds on Reddit on Monday morning.

While the image is no doubt a stunning vision of the hostile, crushing conditions on one of Earth's closest planetary neighbors, it is not quite what it seems.

Another processed image of the planet Venus taken by the Magellan probe that orbited the planet from 1990 to 1994. The planet is known for its hot surface.
SSV/MIPL/MAGELLAN TEAM/NASA


The Claim


On the evening of September 4 a Reddit user posted the below image of the surface of Venus to the subreddit r/interestingasf***, along with the caption: "Clearest image of the surface of Venus. From Soviet Venera 13 in 1982."

By Monday morning, the post had proved popular, having received more than 42,000 upvotes as of 7:30 a.m. ET.

It is not the first time similar posts have been made. The exact same photo, edited to include the caption "This is the only clear photo ever taken from the surface of Venus", was posted by multiple people to Twitter and Facebook in 2021.



The Facts


It is true that the image really does show the surface of Venus from the perspective of the Venera-13 lander—a Soviet-built spacecraft that landed on Venus' surface on March 1, 1982. However, the image has been heavily processed.

The Soviet Union's extensive Venera program saw many such probes being sent to the planet in order to gather information. Today the program is regarded as pivotal to our current understanding of Venus.

None of the probes lasted long, however, due to the extremely harsh surface conditions of Venus. There, temperatures are about 900 degrees Fahrenheit and the air pressure is more than 90 times that of Earth.
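For readers more used to metric units, the quoted surface conditions convert as follows:

```python
# Metric conversions of the Venus surface conditions quoted in the article.
temp_f = 900                      # approximate surface temperature, Fahrenheit
temp_c = (temp_f - 32) * 5 / 9    # standard Fahrenheit-to-Celsius conversion
print(f"{temp_f} F is about {temp_c:.0f} C")

pressure_ratio = 90                       # surface pressure relative to Earth's
pressure_kpa = pressure_ratio * 101.325   # 1 standard atmosphere = 101.325 kPa
print(f"{pressure_ratio}x Earth pressure is about {pressure_kpa:.0f} kPa")
```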

Venera-13 survived for just over two hours, in which time it successfully transmitted a number of images back to Earth.

U.S. researcher Don P. Mitchell, formerly of Princeton University and Microsoft Research, has investigated Soviet exploration of Venus and the original Venera image data. He did not create the color image seen in the social media posts, but he did construct a similar black-and-white one.

On his website, Mitchell states that the original Soviet versions of the images included full panoramas, both in color and black-and-white. However, image quality for the color images was poorer than the black-and-white ones.

Mitchell therefore combined the two types of panoramas together, and processed them so that they appeared as normal landscape images rather than panoramas.



"The Venera panoramas are spherical projections," Mitchell states on his website. "They can be remapped to perspective projections and overlayed (using Adobe Photoshop CS2) to produce views that give a better subjective impression of the Venusian surface."
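The remapping Mitchell describes rests on standard projection geometry: each panorama sample has an azimuth and elevation angle, which map onto the image plane of a pinhole (perspective) camera. A minimal sketch of that geometry follows; it shows the general math, not Mitchell's actual Photoshop workflow.

```python
import math

# Map a spherical-panorama sample at (azimuth, elevation) angles onto the
# image plane of a pinhole camera looking along +z. Standard projection
# geometry, used here only to illustrate the remapping idea.
def sphere_to_perspective(az_deg: float, el_deg: float):
    az, el = math.radians(az_deg), math.radians(el_deg)
    # Unit direction vector for the sample
    x = math.sin(az) * math.cos(el)
    y = math.sin(el)
    z = math.cos(az) * math.cos(el)
    # Project onto the z = 1 image plane
    return x / z, y / z

print(sphere_to_perspective(0.0, 0.0))    # center of view maps to (0.0, 0.0)
print(sphere_to_perspective(30.0, 10.0))  # an off-center sample
```

Resampling every pixel of a spherical panorama through a mapping like this is what turns the lander's sweeping scan into something that reads as a normal landscape photo.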

He added in a 2019 tweet that missing pieces from the panoramas were "filled by duplicates and reversed duplicates".

So some of what can be seen in the photos is not necessarily an accurate representation of the lander's surroundings. What's more, NASA states on its website that the true color of the Venera images is "difficult to judge because the Venerian atmosphere filters out blue light."

The original black and white panorama images can be seen below.

A black-and-white version of the Venera-13 panorama photos of Venus' surface that were later extensively processed. The lander touched down on March 1, 1982.
UNION OF SOVIET SOCIALIST REPUBLICS

News agency AFP reached out to Mitchell in February 2019 regarding social media claims about the Venera images. He said: "I created the [black and white] images. The colorized images are done by other people, and in my opinion are not very accurate."

Newsweek has reached out to Mitchell for comment.


The Ruling



Needs Context.


The accomplishments of the scientists behind the Venera-13 probe should not be understated; the probe took the first color pictures ever transmitted from Venus.

However, the image currently being shared as "the clearest image of the surface of Venus" is heavily processed and has been described as "not very accurate" by a researcher who has studied the data extensively.

FACT CHECK BY NEWSWEEK


 


Scientists Uncover New Physics in the Search for Dark Matter


An estimated 85% of the universe's mass is made up of dark matter, a hypothetical form of matter.


No, scientists still have no idea what dark matter is. However, MSU scientists helped discover new physics while searching for it.

Wolfgang “Wolfi” Mittig and Yassid Ayyad began their search for dark matter—also referred to as the missing mass of the universe—in the heart of an atom around three years ago.

Even though their exploration did not uncover dark matter, the scientists nonetheless discovered something never seen before, something that defied explanation. Well, at least an explanation on which everyone could agree.

“It’s been something like a detective story,” said Mittig, a Hannah Distinguished Professor in Michigan State University’s Department of Physics and Astronomy and a faculty member at the Facility for Rare Isotope Beams, or FRIB.

“We started out looking for dark matter and we didn’t find it,” he said. “Instead, we found other things that have been challenging for theory to explain.”

To make sense of their finding, the team went back to work, conducting further tests and accumulating more data. Mittig, Ayyad, and their colleagues reinforced their argument at Michigan State University's National Superconducting Cyclotron Laboratory, or NSCL.

The researchers discovered a new route to their unanticipated destination while working at NSCL, which they revealed in the journal Physical Review Letters. Additionally, they revealed intriguing physics at work in the ultra-small quantum realm of subatomic particles.

The scientists showed, in particular, that even when an atom’s center, or nucleus, is overcrowded with neutrons, it can find a route to a more stable configuration by spitting out a proton instead.

Shot in the dark

Dark matter is one of the most well-known yet least understood things in the universe. Scientists have known for decades that the universe contains more mass than we can perceive based on the motions of stars and galaxies.

Gravity requires six times as much unseen mass as the regular matter we can see, measure, and classify to hold celestial objects to their courses. Although researchers are certain that dark matter exists, they have yet to determine where it resides or devise a way to detect it directly.
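The "six times" figure squares with the commonly quoted estimate that about 85 per cent of the universe's matter is dark:

```python
# Consistency check on the two figures quoted together in this story:
# if ~85% of the universe's matter is dark, the dark-to-ordinary ratio is
dark_fraction = 0.85
ratio = dark_fraction / (1 - dark_fraction)
print(f"dark matter outweighs ordinary matter by about {ratio:.1f}x")
```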

“Finding dark matter is one of the major goals of physics,” said Ayyad, a nuclear physics researcher at the Galician Institute of High Energy Physics, or IGFAE, of the University of Santiago de Compostela in Spain.

Speaking in round numbers, scientists have launched about 100 experiments to try to illuminate what exactly dark matter is, Mittig said.

“None of them has succeeded after 20, 30, 40 years of research,” he said.

“But there was a theory, a very hypothetical idea, that you could observe dark matter with a very particular type of nucleus,” said Ayyad, who was previously a detector systems physicist at NSCL.

This theory centered on what's known as a dark decay. It posited that certain unstable nuclei, ones that naturally fall apart, could jettison dark matter as they crumbled.

So Ayyad, Mittig, and their team designed an experiment that could look for a dark decay, knowing the odds were against them. But the gamble wasn’t as big as it sounds because probing exotic decays also lets researchers better understand the rules and structures of the nuclear and quantum worlds.

The researchers had a good chance of discovering something new. The question was what that would be.

Help from a halo

When people imagine a nucleus, many may think of a lumpy ball made up of protons and neutrons, Ayyad said. But nuclei can take on strange shapes, including what are known as halo nuclei.

Beryllium-11 is an example of a halo nucleus. It's a form, or isotope, of the element beryllium that has four protons and seven neutrons in its nucleus. It keeps 10 of those 11 nuclear particles in a tight central cluster, but one neutron floats far from that core, loosely bound to the rest of the nucleus, somewhat like the moon circling the Earth, Ayyad said.

Beryllium-11 is also unstable. With a half-life of about 13.8 seconds, it falls apart by what's known as beta decay: one of its neutrons ejects an electron and becomes a proton, transforming the nucleus into boron-11, a stable isotope of boron with five protons and six neutrons.
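That 13.8-second half-life sets how quickly a beryllium-11 sample vanishes. As a minimal sketch of the standard exponential decay law (the function name is illustrative, not from the study):

```python
def surviving_fraction(t_seconds, half_life_seconds=13.8):
    """Fraction of a beryllium-11 sample that has not yet decayed
    after t seconds, using N(t)/N0 = 2**(-t / t_half)."""
    return 2.0 ** (-t_seconds / half_life_seconds)

# After one half-life, half the sample remains;
# after a minute, only about 5 percent of the beryllium-11 is left.
print(surviving_fraction(13.8))   # 0.5
print(surviving_fraction(60.0))
```

The same law governs the beryllium-10 used later in the story; with its 1.4-million-year half-life, a sample is effectively permanent on experimental timescales.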

But according to that very hypothetical theory, if the neutron that decays is the one in the halo, beryllium-11 could go an entirely different route: It could undergo a dark decay.

In 2019, the researchers launched an experiment at Canada’s national particle accelerator facility, TRIUMF, looking for that very hypothetical decay. And they did find a decay with unexpectedly high probability, but it wasn’t a dark decay.

It looked like the beryllium-11’s loosely bound neutron was ejecting an electron like normal beta decay, yet the beryllium wasn’t following the known decay path to boron.

The team hypothesized that the high probability of the decay could be explained if a state in boron-11 existed as a doorway to another decay, to beryllium-10 and a proton. For anyone keeping score, that meant the nucleus had once again become beryllium. Only now it had six neutrons instead of seven.

“This happens just because of the halo nucleus,” Ayyad said. “It’s a very exotic type of radioactivity. It was actually the first direct evidence of proton radioactivity from a neutron-rich nucleus.”

But science welcomes scrutiny and skepticism, and the team’s 2019 report was met with a healthy dose of both. That “doorway” state in boron-11 did not seem compatible with most theoretical models. Without a solid theory that made sense of what the team saw, different experts interpreted the team’s data differently and offered up other potential conclusions.

“We had a lot of long discussions,” Mittig said. “It was a good thing.”

As beneficial as the discussions were — and continue to be — Mittig and Ayyad knew they’d have to generate more evidence to support their results and hypothesis. They’d have to design new experiments.

The NSCL experiments

In the team’s 2019 experiment, TRIUMF generated a beam of beryllium-11 nuclei that the team directed into a detection chamber where researchers observed different possible decay routes. That included the beta decay to proton emission process that created beryllium-10.

For the new experiments, which took place in August 2021, the team’s idea was to essentially run the time-reversed reaction. That is, the researchers would start with beryllium-10 nuclei and add a proton.

Collaborators in Switzerland created a source of beryllium-10, which has a half-life of 1.4 million years, that NSCL could then use to produce radioactive beams with new reaccelerator technology. The technology evaporated and injected the beryllium into an accelerator and made it possible for researchers to make a highly sensitive measurement.

When beryllium-10 absorbed a proton of the right energy, the nucleus entered the same excited state the researchers believed they had discovered three years earlier. It would even spit the proton back out, which could be detected as a signature of the process.

“The results of the two experiments are very compatible,” Ayyad said.

That wasn’t the only good news. Unbeknownst to the team, an independent group of scientists at Florida State University had devised another way to probe the 2019 result. Ayyad happened to attend a virtual conference where the Florida State team presented its preliminary results, and he was encouraged by what he saw.

“I took a screenshot of the Zoom meeting and immediately sent it to Wolfi,” he said. “Then we reached out to the Florida State team and worked out a way to support each other.”

The two teams were in touch as they developed their reports, and both scientific publications now appear in the same issue of Physical Review Letters. And the new results are already generating a buzz in the community.

“The work is getting a lot of attention. Wolfi will visit Spain in a few weeks to talk about this,” Ayyad said.

An open case on open quantum systems

Part of the excitement is because the team’s work could provide a new case study for what is known as open quantum systems. It’s an intimidating name, but the concept can be thought of like the old adage, “nothing exists in a vacuum.”

Quantum physics has provided a framework to understand the incredibly tiny components of nature: atoms, molecules, and much, much more. This understanding has advanced virtually every realm of physical science, including energy, chemistry, and materials science.

Much of that framework, however, was developed considering simplified scenarios. The super small system of interest would be isolated in some way from the ocean of input provided by the world around it. In studying open quantum systems, physicists are venturing away from idealized scenarios and into the complexity of reality.

Open quantum systems are literally everywhere, but finding one that's tractable enough to learn something from is challenging, especially in matters of the nucleus. Mittig and Ayyad saw potential in their loosely bound nuclei, and they knew that NSCL, and now FRIB, could help develop it.

NSCL, a National Science Foundation user facility that served the scientific community for decades, hosted the work of Mittig and Ayyad, which is the first published demonstration of the stand-alone reaccelerator technology. FRIB, a U.S. Department of Energy Office of Science user facility that officially launched on May 2, 2022, is where the work can continue in the future.

“Open quantum systems are a general phenomenon, but they’re a new idea in nuclear physics,” Ayyad said. “And most of the theorists who are doing the work are at FRIB.”

But this detective story is still in its early chapters. To complete the case, researchers still need more data and more evidence to make full sense of what they’re seeing. That means Ayyad and Mittig are still doing what they do best and investigating.

“We’re going ahead and making new experiments,” said Mittig. “The theme through all of this is that it’s important to have good experiments with strong analysis.”

Reference: “Evidence of a Near-Threshold Resonance in 11B Relevant to the β-Delayed Proton Emission of 11Be” Y. Ayyad, W. Mittig, T. Tang, B. Olaizola, G. Potel, N. Rijal, N. Watwood, H. Alvarez-Pol, D. Bazin, M. Caamaño, J. Chen, M. Cortesi, B. Fernández-Domínguez, S. Giraud, P. Gueye, S. Heinitz, R. Jain, B. P. Kay, E. A. Maugeri, B. Monteagudo, F. Ndayisabye, S. N. Paneru, J. Pereira, E. Rubino, C. Santamaria, D. Schumann, J. Surbrook, L. Wagner, J. C. Zamora and V. Zelevinsky, 1 June 2022, Physical Review Letters.
DOI: 10.1103/PhysRevLett.129.012501

NSCL was a national user facility funded by the National Science Foundation, supporting the mission of the Nuclear Physics program in the NSF Physics Division.

A new explanation for the reddish north pole of Pluto's moon Charon

Credit: Pablo Carlos Budassi, CC BY-SA 4.0 , via Wikimedia Commons

A trio of researchers at Purdue University has developed a new theory to explain why Pluto's moon Charon has a reddish north pole. In their paper published in the journal Nature Communications, Stephanie Menten, Michael Sori and Ali Bramson describe their study of the reddish surfaces of many icy objects in the Kuiper Belt and how they might relate to Charon's reddish pole.

Prior research has shown that many icy objects in the Kuiper Belt are partly or entirely covered in reddish brown material. Prior research has also shown that the material is a kind of tholin, compounds that form when organic chemicals are showered with radiation. But that has raised the question of where the tholin may have come from. In this new effort, the researchers theorize that it comes from methane released by cryovolcanoes.

To test their theory, the researchers turned to Pluto's moon Charon, whose north pole is covered with tholin. They note that prior research suggests that gases escaping from Pluto are responsible for the reddish pole. But prior research has also shown that the moon was once covered by a liquid ocean that contained many materials, including methane.

As the ocean froze, the methane would have become trapped in the ice, the researchers note. As the remaining water became pressurized, cracks would have formed, leading to occasional eruptions. Such cryovolcanic eruptions, they suggest, could have released some amount of methane gas. Any methane that drifted all the way to the north pole would have frozen and fallen to the surface, where millions of years of radiation from the sun would have turned it red.

The researchers created simulations of methane molecules drifting around in Charon's atmosphere, calculating how much methane could have escaped under such a scenario and how much might have made it to the pole. They found that approximately 1,000 billion metric tons of the gas could have made it to the northern pole, more than enough to create a red cap.
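For a rough sense of scale, that quoted mass can be converted into an average frost thickness over Charon's north polar region. This is a back-of-the-envelope check, not a calculation from the paper: Charon's roughly 606 km radius is well established, but the cap boundary latitude and frost density below are illustrative assumptions.

```python
import math

R = 606e3          # Charon's mean radius in meters (well-established value)
mass = 1.0e15      # 1,000 billion metric tons of methane, in kg
rho_ice = 500.0    # assumed density of porous methane frost, kg/m^3 (illustrative)
lat_deg = 60.0     # assumed southern edge of the polar cap (illustrative)

# Area of a spherical cap poleward of latitude lat_deg:
# A = 2 * pi * R^2 * (1 - sin(latitude))
area = 2 * math.pi * R**2 * (1 - math.sin(math.radians(lat_deg)))

thickness_m = mass / (rho_ice * area)
print(f"average frost thickness: {thickness_m:.1f} m")
```

Under these assumptions the deposit works out to a layer only a few meters thick, consistent with the idea that a modest veneer of irradiated methane could redden the pole.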

More information: Stephanie M. Menten et al, Endogenically sourced volatiles on Charon and other Kuiper belt objects, Nature Communications (2022). DOI: 10.1038/s41467-022-31846-8
One of the largest solar storms ever detected just erupted on the far side of the sun

It was "no run-of-the-mill event."

By Tereza Pultarova, published on Space.com
A giant coronal mass ejection burst from the sun toward Venus on Sept. 5, 2022.
 (Image credit: NASA/STEREO)

Earth's sister planet Venus is experiencing a bout of extreme space weather this week after a giant sunspot, not visible from Earth, expelled an enormous plasma burst toward the scorching-hot planet.

On Monday (Sept. 5), NASA's STEREO-A sun-watching spacecraft spotted a coronal mass ejection (CME), a cloud of charged particles erupting from the corona, the upper layer of the sun's atmosphere, emerging from behind the sun, SpaceWeather.com reported.

The CME is the second to have hit Venus in a week; another one erupted from the sun on Wednesday (Aug. 30) and reached the planet three days later, just as the European Solar Orbiter spacecraft flew by.


George Ho, a solar physicist at the Johns Hopkins University Applied Physics Laboratory, told SpaceWeather.com that the latest eruption was "no run-of-the-mill event."

“I can safely say the Sept. 5th event is one of the largest (if not THE largest) Solar Energetic Particle (SEP) storms that we have seen so far since Solar Orbiter launched in 2020,” Ho, who is one of the lead investigators of the Energetic Particle Detector instrument aboard Solar Orbiter, told SpaceWeather.com. “It is at least an order of magnitude stronger than the radiation storm from last week’s CME.”

The team operating the magnetometer instrument aboard the spacecraft, however, tweeted that the CME "appears to have largely missed" Solar Orbiter, although the spacecraft was affected by the energetic particles it delivered.

"There was ... a very large number of energetic particles from this event and [the magnetometer] experienced 19 'single event upsets' in its memory yesterday," the magnetometer team said in the tweet. "[The Solar Orbiter magnetometer] is robust to radiation: it automatically corrected the data as designed and operated nominally throughout."

Ho added that the energetic intensity of the charged particles around the spacecraft "has not subsided since the beginning of the storm."

“This is indicative of a very fast and powerful interplanetary shock, and the inner heliosphere may be filled with these high-energy particles for a long time. I think I’ve only seen a couple of these in the last couple solar cycles,” Ho told SpaceWeather.com. (The heliosphere is the huge bubble of charged particles and magnetic fields that the sun blows around itself.)


The source of the powerful eruption is believed to be the sunspot region AR3088, which crossed the Earth-facing side of the sun's disk in August and has likely grown into a much more powerful beast since disappearing from Earth's view.

Due to the sun's rotation, the sunspot will face our planet again next week, SpaceWeather.com said, which means Earth, too, may be in for some space weather activity soon.

Solar Orbiter was built to measure such events, so scientists can hardly complain about the battering. As Ho told SpaceWeather.com, "many science papers will be studying this [event] for years to come."

Originally published on Space.com.