Monday, April 20, 2026


Industrial chemicals delay recovery of the ozone layer



Ozone protection under pressure




Swiss Federal Laboratories for Materials Science and Technology (Empa)



Image: The Jungfraujoch high alpine research station is located at 3,580 meters above sea level on a mountain saddle in the central Swiss Alps. (Credit: Empa)





Although ozone-depleting chemicals such as carbon tetrachloride (CCl₄) or certain chlorofluorocarbons (CFCs) are no longer used in refrigerators and foams, they continue to serve as feedstocks in industrial processes for the production of modern refrigerants and plastics. Until now, these so-called feedstock chemicals have flown under the radar of international agreements because the quantities produced and leakage rates were significantly underestimated.

Working with international research groups, Empa researchers have now used global measurements to show that during the production and processing of these substances, approximately three to four percent escapes into the atmosphere through leaks. Furthermore, their use has increased significantly in recent decades. In a study published in Nature Communications, they have now calculated that, as a result, the ozone layer is likely to recover about seven years later than previously assumed – unless emissions are reduced. “These substances are not only ozone-depleting but also highly harmful to the climate. Lower emissions would thus benefit both the ozone layer and the climate,” says Stefan Reimann, an atmospheric scientist at Empa and lead author of the study.

Measurements show higher emissions

When the Montreal Protocol was negotiated in the 1980s and later strengthened, it led to a global ban on ozone-depleting substances in everyday products. Feedstock chemicals, however, were exempt from this ban. At the time, industry assumed that only about 0.5 percent of the quantities produced would escape into the atmosphere and that the use of these substances would decline in the long term. “But this assessment has not been accurate for quite some time,” says Reimann. “Feedstock chemicals are now being released in increased quantities during production, transport, and further processing, and the volumes currently being produced are significantly larger than was assumed 30 years ago.”

These new findings are based on global atmospheric measurements from international networks such as the Advanced Global Atmospheric Gases Experiment (AGAGE), which includes the Empa research station on the Jungfraujoch. Since many ozone-depleting substances remain in the atmosphere for decades, their concentrations allow conclusions to be drawn about global emissions. “We measure the concentrations of these substances in the atmosphere. Based on their lifetimes, we can calculate how much they should actually be decreasing. If they aren’t, emissions must still be occurring,” explains Martin Vollmer, an Empa researcher and co-author of the study.
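Vollmer’s logic can be written down as a simple budget: for a gas with atmospheric lifetime τ, the burden obeys dC/dt = E/M − C/τ, so a decline slower than the sink alone would produce implies ongoing emissions E. The minimal sketch below illustrates that inversion for a single well-mixed box with invented numbers; the actual inversions behind studies like this use multi-box transport models and real AGAGE records.

```python
import numpy as np

def infer_emissions(conc, lifetime_yr, burden_factor, dt_yr=1.0):
    """Back out emissions from an observed concentration time series.

    One-box budget: dC/dt = E/M - C/tau, so per time step
    E[t] = M * ((C[t+1] - C[t]) / dt + C[t] / tau).
    conc          -- observed global mean mixing ratios
    lifetime_yr   -- atmospheric lifetime tau in years
    burden_factor -- M, converts mixing ratio to atmospheric burden
    """
    conc = np.asarray(conc, dtype=float)
    trend = np.diff(conc) / dt_yr          # observed rate of change
    loss = conc[:-1] / lifetime_yr         # first-order atmospheric sink
    return burden_factor * (trend + loss)  # implied emissions per step

# Toy example: a CCl4-like gas (lifetime ~30 years) that declines far more
# slowly than its sink alone would allow -- positive inferred emissions
# reveal ongoing sources, the same logic used to flag feedstock leakage.
years = np.arange(2000, 2021)
observed = 100.0 * np.exp(-0.005 * (years - 2000))
print(infer_emissions(observed, lifetime_yr=30.0, burden_factor=1.0)[:3])
```

With these toy numbers the concentration falls by only 0.5 percent per year while the sink alone would remove over 3 percent per year, so the inferred emissions stay clearly positive.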

A comparison of these measurements with the production figures officially reported by individual countries shows that today, an average of three to four percent of the feedstock produced enters the atmosphere – several times the originally assumed values. For carbon tetrachloride, which is particularly harmful to the ozone layer, emission rates are even above four percent.

Why usage is increasing

However, emissions are rising not only because of higher production losses, but also because the overall use of feedstock chemicals is increasing – by about 160 percent since the year 2000. Some of these feedstocks were initially used to produce hydrofluorocarbons (HFCs), which were introduced as refrigerant substitutes following the ban on CFCs. Since these substitutes later proved to be potent greenhouse gases, they are now being phased out under the so-called Kigali Amendment. They are increasingly being replaced by hydrofluoroolefins (HFOs), which have little impact on the climate but whose production again relies heavily on ozone-depleting feedstock chemicals.

Added to this is a rapidly growing use in the polymer industry – for example, in the production of fluoropolymers such as Teflon (PTFE) or polyvinylidene fluoride (PVDF), an important material in lithium-ion batteries for electric cars. “The quantities of feedstock are not decreasing but will continue to grow, at least in the coming years,” says Reimann.

Both the ozone layer and the climate are affected

Based on these developments, the international research team calculated various future scenarios. They compared, for example, the originally assumed, very low emission rates with the values measured today from the use of feedstock chemicals. The established benchmark from 1980, when global ozone depletion was first observed, serves as a reference. Until now, it was assumed that this original state of the ozone layer would be reached again around the year 2066. However, the new calculations show that if feedstock emissions remain at current levels, this timeline will shift by about seven years. The stratospheric ozone layer would therefore not fully recover until around 2073. The margin of uncertainty for this estimate ranges from six to eleven years.
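The direction of such a shift can be illustrated with a deliberately simplified calculation. In the toy model below every number is invented and the physics is reduced to a single exponential: the excess chlorine burden above the 1980 benchmark decays with an effective lifetime, and a constant residual source raises the floor toward which it decays, pushing back the year it recrosses the benchmark. This is only a sketch of the mechanism, not the study’s chemistry-climate modelling.

```python
import numpy as np

# Purely illustrative toy model with invented round numbers (NOT the
# modelling used in the study). The excess chlorine burden X(t) above the
# 1980 benchmark decays with effective lifetime TAU; a constant residual
# source sustains a floor X_INF, so
#   X(t) = X_INF + (X0 - X_INF) * exp(-t / TAU).
TAU, X0, X_THRESH = 35.0, 3.7, 1.0   # hypothetical units

def recovery_year(x_inf, start_year=2020):
    """First year X(t) falls back to the 1980-benchmark threshold."""
    t = TAU * np.log((X0 - x_inf) / (X_THRESH - x_inf))
    return start_year + t

baseline = recovery_year(x_inf=0.0)    # feedstock leakage eliminated
sustained = recovery_year(x_inf=0.23)  # constant feedstock leakage floor
print(f"recovery without feedstock emissions: ~{baseline:.0f}")   # ~2066
print(f"recovery with sustained leakage:      ~{sustained:.0f}")  # ~2073
print(f"delay: ~{sustained - baseline:.0f} years")                # ~7
```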

However, the feedstock chemicals released not only damage the ozone layer but also act as powerful greenhouse gases. If nothing changes, these additional climate-damaging emissions will reach around 300 million metric tons of CO₂ equivalents per year by mid-century – comparable to the current annual CO₂ emissions of a country like England or France. Reducing these emissions would therefore have a dual benefit.

Whether these emissions will be reduced in the future through binding emission limits or a targeted restriction of particularly problematic substances is, according to Stefan Reimann, ultimately a political decision. Even though the Montreal Protocol continues to be regarded as one of the greatest successes of international environmental policy, it should be regularly reviewed and, if necessary, adapted in light of new scientific findings. “The Montreal Protocol was successful because science, politics, and industry worked closely together. Such cooperation is crucial again today to address new challenges,” says Reimann.


Prompt coaching tool raises user awareness of bias in generative AI systems






Penn State
Image: The inclusive prompt coaching tool developed by a team of Penn State-led researchers warns users about bias in AI systems and suggests a prompt to generate more inclusive content. (Credit: Penn State)





UNIVERSITY PARK, Pa. — A coaching tool built into artificial intelligence (AI)-powered systems may raise user awareness of bias in AI algorithms and help individuals better prompt generative AI tools to produce more inclusive content, according to researchers at Penn State and Oregon State University.

The researchers developed a new text-to-image generative AI application intended to provide immediate media literacy interventions — methods designed to make users pause and reflect on the inclusiveness of their prompt design before image generation. As users enter prompts into the application, the “inclusive prompt coaching” tool issues warnings about biases in generative AI systems and offers suggestions for making their prompts more inclusive. The team presented their research today (April 16) at the 2026 Association for Computing Machinery (ACM) CHI Conference on Human Factors in Computing Systems in Barcelona, Spain. The paper received an honorable mention from the conference’s awards committee.

In the study, the researchers found that the inclusive prompt coaching intervention increased users’ awareness of algorithmic bias, that is, the tendency of AI systems to produce stereotypical content. It also boosted their confidence in writing inclusive prompts to produce less biased outputs, and it increased users’ perceived trust calibration, or their ability to adjust their trust levels to better reflect the systems’ actual trustworthiness. The intervention led to a less satisfactory user experience, however, according to the researchers.

“Oftentimes, media literacy interventions like those for social media occur outside of the medium, informing or warning users about the dangers of social media before or after they’ve interacted with it,” said study co-author S. Shyam Sundar, Evan Pugh University Professor and the James P. Jimirro Professor of Media Effects at Penn State. “Here we are using the medium itself — AI text-to-image generators — to educate users about how to better use the medium while they’re interacting with it. It’s a newer twist on the media literacy approach to address the problem of lack of inclusiveness in generative AI.”

To see if prompt coaching can serve as an effective media literacy intervention, the researchers recruited 344 study participants from an online survey platform. They randomly assigned the participants to one of three study conditions: an inclusive prompt coaching condition, a detailed prompt coaching condition, and a no-coaching condition. The latter two served as control conditions. The researchers asked participants to use the system to generate an image of any character and then answer questions about their experience using the AI system, such as how much control they felt they had over the tool, their awareness of algorithmic bias and their confidence in their ability to craft effective prompts.

Participants in the inclusive prompt coaching condition received feedback on their prompts as soon as they wrote them. If a participant asked the tool to generate an image of “beautiful girls in the forest,” it would draw their attention to potential bias by explaining that the prompt reinforces the notion that female beauty is primarily defined by physical appearance, running the risk of objectifying the characters. It would then suggest a more inclusive wording, such as “enchanting individuals in a forest.”
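The paper’s implementation is not public, so the sketch below only illustrates how such a coaching step could be structured; the rule table, function names, and wording are all invented stand-ins for whatever bias detector the real system uses.

```python
# Hypothetical sketch of an inclusive-prompt-coaching step. The authors'
# system is not public; the rule table and function names are invented.
from dataclasses import dataclass

@dataclass
class CoachingFeedback:
    warning: str | None      # why the prompt may yield biased images
    suggestion: str | None   # a more inclusive rewording, if any

# Toy rule table: flagged pattern -> (explanation, suggested replacement).
# A real system would use a classifier or an LLM rather than string matching.
RULES = {
    "beautiful girls": (
        "Tying female characters to physical beauty reinforces the "
        "stereotype that women are defined by appearance.",
        "enchanting individuals",
    ),
}

def coach_prompt(prompt: str) -> CoachingFeedback:
    """Return a warning and a rewrite before any image is generated."""
    lowered = prompt.lower()
    for pattern, (why, replacement) in RULES.items():
        if pattern in lowered:
            return CoachingFeedback(
                warning=why,
                suggestion=lowered.replace(pattern, replacement),
            )
    return CoachingFeedback(warning=None, suggestion=None)  # prompt passes

feedback = coach_prompt("beautiful girls in the forest")
print(feedback.warning)
print(feedback.suggestion)  # "enchanting individuals in the forest"
```

A deployed coach would plausibly gate such a check on the context of the prompt, which is the direction the researchers describe below for requests as innocent as “a cute toad.”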

Those who went through this intervention reported higher awareness of algorithmic bias compared to those in the no coaching condition. They also reported a higher perception of being able to craft effective prompts compared to those in the other two conditions. Yet participants in the inclusive and detailed prompt coaching conditions reported a more frustrating user experience compared to those in the no coaching condition.

“We found a positive effect of this new approach on improving people’s awareness of algorithmic bias and increasing their confidence in creating effective prompts to reduce bias in AI images,” said first author Cheng “Chris” Chen, assistant professor of emerging media and technology at Oregon State University, who completed her doctorate with Sundar at Penn State. “The downside of the current version is that participants perceived it as less helpful and more frustrating compared to the control conditions, but we can address this in future design iterations.”

Participant feedback suggested that there was resentment among users that the AI system was giving them “a slap on the wrist” for not being inclusive, or that it was identifying potential biases in prompts but then generating images with biased components, the researchers explained. They pointed to one example where the system issued a warning and offered a suggestion for an innocent prompt asking for an image of “a cute toad.”

“To address these complaints, we can make the system more context aware and more specifically tailor it to user prompts, because some prompts may be more innocent than others,” Chen said. “More tailored interventions may be able to reduce negative perceptions regarding the user experience, reduce frustrations with the design and improve perceived helpfulness.”

Giving users the option of toggling the system on and off could also address the user experience issues, added Sundar, who is also the director of the Penn State Center for Socially Responsible Artificial Intelligence (CSRAI).

“When you’re asking an AI system to generate an image of a toad, the system should not bother trying to automatically correct your lack of inclusiveness,” he said. “But when you’re dealing with a topic much more in the world of human affairs, the system should realize that you might need help, and that you might appreciate assistance with regard to prompt coaching for inclusiveness.”

The prompt coaching approach could help technology companies make their AI tools more ethical and responsible, which could promote appropriate trust among their users, Chen said.

“For everyday users, the inclusive prompt coaching intervention could provide a moment to pause and reflect on how inclusive their prompt is to elicit the best output from AI,” she said. “We found that the increased thinking, or elaboration, in users’ prompt design led to greater trust and improved perceptions of trust calibration.”

In addition to Sundar and Chen, other study co-authors were Mengqi Liao, assistant professor at the University of Georgia who received her doctorate from Penn State; Penn State master’s students Aditya Anand Phadnis and Yao Li; Andrew High, professor of communication arts and sciences at Penn State; and Saeed Abdullah, associate professor of information sciences and technology at Penn State.

Image: A team led by Penn State researchers developed an inclusive prompt coaching tool that helped study participants identify bias in AI systems and better prompt generative AI tools to produce more inclusive content. This AI-generated image of a radiant Black woman resulted from a prompt suggested by the tool. (Credit: Penn State)

A student-led experiment sets new limits in the search for axions


A study published in JCAP shows how, with limited resources and support from a large experiment, students built an axion detector and helped narrow down the properties of dark matter



Sissa Medialab

Image: The experimental apparatus built and used by students at the University of Hamburg. (Credit: Nabil Salama and Agit Akgümüs)





In the era of precision cosmology, research often means big science: large observatories, highly complex instruments, international collaborations and substantial funding. Yet even in such an advanced field, progress is still possible — including in the search for elusive dark matter — through more agile approaches, driven by small teams and young researchers, supported by institutions and a good dose of ingenuity.

In a paper just published in the Journal of Cosmology and Astroparticle Physics (JCAP), a group of then-undergraduate students from the University of Hamburg describe how they built a cavity detector to search for axions, among the most promising candidates for dark matter, and set new experimental limits on their properties. The result was achieved with relatively limited resources, showing that even small-scale experiments can make a meaningful contribution to one of the major open challenges in modern physics.

Funding for students

The project was made possible through a student research grant provided by the University of Hamburg via the Hub for Crossdisciplinary Learning, which supports independent research initiatives.

“We were kind of embedded in the research group of the MADMAX dark matter experiment,” explains Nabil Salama, one of the authors of the study, currently pursuing an M.Sc. in Physics at the University of Hamburg. “MADMAX carries out a similar experiment on a much larger and more complex scale, and we benefited from their expertise and support.”
“We are very grateful for this help,” he adds, “and also to the University of Hamburg and the Quantum Universe Cluster of Excellence, which provided funding, access to key equipment such as the magnet, and invaluable support from researchers.”

Searching for dark matter

“The benefit of working with dark matter, or axions, is that we expect it to be present everywhere in our galaxy,” says Agit Akgümüs, first author of the study with Salama, currently pursuing an M.Sc. in Mathematical Physics at the University of Hamburg. “So essentially, no matter where you perform the experiment, you have some dark matter on your hand you can do experiments with.”

The funding was first used to build the experimental setup, starting with a resonant cavity made from highly conductive materials, along with the necessary electronics, cabling, supports and measurement instruments. “The detector we built is essentially the simplest version of a cavity detector for dark matter,” says Salama.
The team did not work entirely from scratch: in addition to the funding, they relied on existing infrastructure and equipment provided by the university and collaborating research groups.
The experiment was then tested, calibrated and operated to collect data for analysis.

“We reduced very complex experiments to their essential components,” says Salama. “The result is a less sensitive setup, limited to a small search window, but still capable of producing new scientific data.”
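The feasibility arithmetic behind any cavity haloscope, large or small, follows the Dicke radiometer equation, SNR = (P_sig / k_B T_sys) · √(t / Δν): a conversion signal far weaker than thermal noise becomes detectable, slowly, by integrating in time. A minimal sketch with invented order-of-magnitude numbers, not the Hamburg setup’s actual parameters:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def radiometer_snr(p_signal_w, t_sys_k, t_int_s, bandwidth_hz):
    """Dicke radiometer equation: SNR of a narrow spectral line in thermal
    noise, SNR = (P_sig / (k_B * T_sys)) * sqrt(t_int / bandwidth)."""
    return p_signal_w / (K_B * t_sys_k) * math.sqrt(t_int_s / bandwidth_hz)

# Invented order-of-magnitude numbers for a small room-temperature
# haloscope; these are NOT the parameters of the Hamburg experiment.
p_sig = 1e-22     # hypothetical axion-to-photon conversion power, W
t_sys = 300.0     # room-temperature receiver noise, K
delta_nu = 5e3    # expected axion linewidth, a few kHz
t_int = 86400.0   # one day of integration at a single cavity tuning

print(f"SNR ≈ {radiometer_snr(p_sig, t_sys, t_int, delta_nu):.2f}")  # ≈ 0.10
```

With these toy values a full day of data still leaves the line buried in noise, which illustrates why a small setup can only exclude axions whose photon coupling would have produced a much stronger signal, and why larger experiments buy sensitivity with stronger magnets, bigger cavities, and cryogenic receivers.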

No signal found, new limits set

“The search for axions involves exploring a wide range of possible parameters,” adds Akgümüs. “Our experiment covers only a small region, with limited sensitivity, but it still helps narrow down the possibilities. To actually find the particle, we need either much larger experiments or many different ones, each probing a specific region.”
At the end of the data-taking phase, the team did not observe any signal attributable to axions. Rather than a failure, this is a meaningful scientific result: it allows researchers to exclude the presence of axions with certain properties within the explored mass range, particularly those with stronger interactions with photons. In this way, the study helps narrow the parameter space and guide future searches.

“I think the point of our experiment is that things can be done on a smaller scale,” says Salama. Akgümüs adds: “Our results are naturally more limited than those of larger experiments. Performance scales with resources and complexity. However, we have shown that it is possible to reduce these setups to a much smaller scale — even to projects developed almost independently by students — while still producing real scientific data.”

During the peer-review process of the paper, a referee made a particularly notable comment, Salama recalls. According to the referee, once the axion is discovered and its properties — especially its mass — are known, experiments of this kind could become far more accessible, potentially even suitable for teaching laboratories. “We were told that setups like ours could one day become standard student lab experiments,” says Salama. “In a way, we may have anticipated that future, showing that it is already possible to build and operate such an experiment on a small scale.”

The paper “A New Limit for Axion Dark Matter with SPACE” by M. A. Akgümüs, N. Salama, J. Egge, E. Garutti, M. Maroudas, L. H. Nguyen, and D. Leppla-Weber has been published in the Journal of Cosmology and Astroparticle Physics (JCAP).

Image: Salama (left) and Akgümüs (right) with the experimental apparatus. (Credit: Nabil Salama and Agit Akgümüs)


“Can we hear lost voices again?” Restoring ‘my voice’ by reading muscle movements with light and reviving them with AI




Pohang University of Science & Technology (POSTECH)
Image: Schematic diagram illustrating the difference between communication using conventional voice-based methods and the developed silent speech interface. (Credit: POSTECH)





Hearing words even when they are spoken in silence: a new technology reads the subtle movements of the neck muscles using light and employs AI to reconstruct them into an actual voice.


A research team led by Professor Sung-Min Park (Department of IT Convergence Engineering, Mechanical Engineering, Electrical Engineering, and the Graduate School of Convergence) and Dr. Sunguk Hong (Department of Mechanical Engineering) at POSTECH (Pohang University of Science and Technology) conducted this study. The findings were published in the online edition of Cyborg and Bionic Systems, a Science Partner Journal in the field of biomedical engineering.


The research began with tiny changes that occur around the neck when a person speaks. It is not just the vocal cords that create sound. Whenever we speak, the muscles and skin around the neck move together, drawing an invisible "movement map" on the skin. The research team focused on the fact that these microscopic movements contain information about what the person intends to say.


To capture this information, the research team developed a ‘Multiaxial Strain Mapping Sensor.’ This sensor, which combines a miniature camera with small reference markers on a soft silicone material, can be conveniently worn on the neck and detects even the most minute skin movements. The wearing position and tightness can be adjusted for the individual, and an algorithm automatically corrects errors that may occur when the device is reattached, allowing it to operate stably in daily environments.
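The press release does not spell out the algorithm; a standard way to turn tracked marker positions into a multiaxial strain readout is to fit a 2-D deformation gradient to the marker displacements by least squares and take the Green-Lagrange strain from it. A minimal sketch under that assumption:

```python
import numpy as np

def strain_from_markers(ref_xy, cur_xy):
    """Estimate a uniform 2-D strain state from tracked skin markers.

    Fits the deformation gradient F in (cur - c_cur) ~= F @ (ref - c_ref)
    by least squares over all markers, then returns the Green-Lagrange
    strain E = 0.5 * (F.T @ F - I): the diagonal holds the two axial
    strains, the off-diagonal the shear -- a multiaxial readout.
    """
    ref = np.asarray(ref_xy, float)
    cur = np.asarray(cur_xy, float)
    ref -= ref.mean(axis=0)                       # remove rigid translation
    cur -= cur.mean(axis=0)
    F = cur.T @ ref @ np.linalg.inv(ref.T @ ref)  # least-squares fit of F
    return 0.5 * (F.T @ F - np.eye(2))

# Toy frame: four markers stretched 2% horizontally, compressed 1% vertically.
ref = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
cur = ref * np.array([1.02, 0.99])
print(strain_from_markers(ref, cur))  # diagonal ≈ [0.0202, -0.00995]
```

Under this reading, the automatic correction for reattachment that the team mentions would amount to re-estimating the reference marker configuration each time the sensor is put back on.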


The strain patterns collected by the sensor are analyzed by AI. It estimates the words or sentences the user intends to say and combines them with voice synthesis technology trained on the individual's vocal characteristics to reproduce the actual voice. Even without producing sound, it "reads" the speech and converts it into a voice.
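The release does not specify the model itself; a common baseline for mapping a strain time series to a word label is a small recurrent classifier. A minimal PyTorch sketch with invented sizes:

```python
import torch
import torch.nn as nn

class SilentSpeechClassifier(nn.Module):
    """Toy baseline: strain time series in, word class out (sizes invented)."""

    def __init__(self, n_channels=3, hidden=64, n_words=20):
        super().__init__()
        # n_channels strain components per frame, e.g. two axial + one shear
        self.encoder = nn.GRU(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_words)

    def forward(self, strain_seq):          # (batch, time, n_channels)
        _, h_final = self.encoder(strain_seq)
        return self.head(h_final[-1])       # (batch, n_words) logits

model = SilentSpeechClassifier()
dummy = torch.randn(8, 120, 3)              # 8 utterances, 120 frames each
predicted_word_ids = model(dummy).argmax(dim=1)
print(predicted_word_ids.shape)             # torch.Size([8])
```

In the pipeline described above, the predicted text would then drive a speech synthesizer trained on the user’s own vocal characteristics, turning a silent articulation back into that person’s voice.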


Existing voice restoration technologies have relied on biological signals such as EMG (electromyography) or EEG (electroencephalography), but complex equipment and uncomfortable wearables have limited their use in daily life. The research team solved this problem with a wearable sensor and confirmed through experiments that speech could be reconstructed with high accuracy even in noisy environments such as factories.


The scope of application is also broad. It is expected to be used in various fields, such as communication assistance for patients who have lost their voices due to vocal cord diseases or laryngeal surgery, communication technology for industrial sites without microphones or radios, and even "silent communication" in libraries or conference rooms.


Professor Sung-Min Park, who led the study, said, "We hope this technology will accelerate the day when patients with speech disorders can reclaim their voices," adding, "It is a noteworthy technology because it has a wide range of potential applications, including assisting laryngectomized patients, communicating in noisy industrial environments, and even supporting silent conversations."


This research was supported by the Doctoral Course Research Grant Program and the Mid-career Researcher Program of the Ministry of Education, as well as the Bio & Medical Technology Development Program and the Pioneering Convergence Science and Technology Development Program of the Ministry of Science and ICT.