Saturday, April 05, 2025

 

Mechanistic understanding could enable better fast-charging batteries


University of Wisconsin-Madison





MADISON — Fast-charging lithium-ion batteries are ubiquitous, powering everything from cellphones and laptops to electric vehicles. They’re also notorious for overheating or catching fire.

Now, with an innovative computational model, a University of Wisconsin–Madison mechanical engineer has gained new understanding of a phenomenon that causes lithium-ion batteries to fail.

Developed by Weiyu Li, an assistant professor of mechanical engineering at UW–Madison, the model explains lithium plating, in which fast charging triggers metallic lithium to build up on the surface of a battery’s anode, causing the battery to degrade faster or catch fire.

This knowledge could lead to fast-charging lithium-ion batteries that are safer and longer-lasting.

Until now, the mechanisms that trigger lithium plating have not been well understood. With her model, Li studied lithium plating on a graphite anode in a lithium-ion battery. The model revealed how the complex interplay between ion transport and electrochemical reactions drives lithium plating. She detailed her results in a paper published on March 10, 2025, in the journal ACS Energy Letters.

“Using this model, I was able to establish relationships between key factors, such as operating conditions and material properties, and the onset of lithium plating,” Li says. “From these results, I created a diagram that provides physics-based guidance on strategies to mitigate plating. The diagram makes these findings very accessible, and researchers can harness the results without needing to perform any additional simulations.”

Researchers can use Li’s results to design not only better battery materials but also, importantly, charging protocols that extend battery life.

“This physics-based guidance is valuable because it enables us to determine the optimal way to adjust the current densities during charging, based on the state of charge and the material properties, to avoid lithium plating,” Li says.
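A hypothetical sketch of what such guidance can look like in practice: a step-down charging schedule that lowers the current as the state of charge rises, since plating risk grows as the anode fills. The thresholds and C-rates below are illustrative placeholders, not values from Li's model.

```python
# Illustrative (hypothetical) step-down charging schedule: reduce the
# charging rate as state of charge (SOC) rises to lower lithium-plating
# risk. Thresholds and C-rates are made-up examples, not Li's results.

def charge_current(soc: float) -> float:
    """Return an illustrative charging rate (in C) for a given SOC (0..1)."""
    if not 0.0 <= soc <= 1.0:
        raise ValueError("SOC must be between 0 and 1")
    if soc < 0.3:      # low SOC: the anode can accept lithium quickly
        return 3.0
    elif soc < 0.6:    # moderate SOC: begin tapering the current
        return 2.0
    elif soc < 0.8:
        return 1.0
    else:              # high SOC: plating risk is highest, charge gently
        return 0.5

print(charge_current(0.2))  # high current early in the charge
print(charge_current(0.9))  # gentle current near full
```

In practice the switch points would come from a physics-based diagram like Li's, mapping state of charge and material properties to a maximum plating-free current density.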

Previous research on lithium plating has mainly focused on extreme cases. Notably, Li’s model provides a way to investigate the onset of lithium plating over a much broader range of conditions, enabling a more comprehensive picture of the phenomenon.

Li plans to further develop her model to incorporate mechanical factors, such as stress generation, to explore their impact on lithium plating.

# # #

--Adam Malecek, acmalecek@wisc.edu

 

UNM scientists discover how nanoparticles of toxic metal used in MRI scans infiltrate human tissue




University of New Mexico Health Sciences Center




University of New Mexico researchers studying the health risks posed by gadolinium, a toxic rare earth metal used in MRI scans, have found that oxalic acid, a molecule found in many foods, can generate nanoparticles of the metal in human tissues.

In a new paper published in the journal Magnetic Resonance Imaging, a team led by Brent Wagner, MD, professor in the Department of Internal Medicine in the UNM School of Medicine, sought to explain the formation of the nanoparticles, which have been associated with serious health problems in the kidneys and other organs.

“The worst disease caused by MRI contrast agents is nephrogenic systemic fibrosis,” he said. “People have succumbed after just a single dose.” The condition can cause a thickening and hardening of the skin, heart and lungs, as well as painful contracture of the joints.

Gadolinium-based contrast agents are injected prior to MRI scans to help create sharper images, Wagner said. The metal is usually tightly bound to other molecules and is excreted from the body, and most people experience no adverse effects. However, previous research has shown that even in those with no symptoms, gadolinium particles have been found in the kidney and the brain and can be detected in the blood and urine years after exposure.

Scientists are left with intertwined puzzles: Why do some people get sick, when most don’t, and how do gadolinium particles become pried loose from the other molecules in the contrast agent?

“Almost half of the patients had been exposed only a single time, which means that there’s something that is amplifying the disease signal,” Wagner said. “This nanoparticle formation might explain a few things. It might explain why there's such an amplification of the disease. When a cell is trying to deal with this alien metallic nanoparticle within it, it's going to send out signals that tell the body to respond to it.”

In their study, Wagner’s team focused on oxalic acid, which is found in many plant-based foods, including spinach, rhubarb, most nuts and berries and chocolate, because it binds with metal ions. The process helps lead to the formation of kidney stones, which result when oxalate binds with calcium. Meanwhile, oxalic acid also forms in the body when people eat foods or supplements containing vitamin C.

In test tube experiments the researchers found that oxalic acid caused minute amounts of gadolinium to precipitate out of the contrast agent and form nanoparticles, which then infiltrated the cells of various organs.

“Some people might form these things, while others do not, and it may be their metabolic milieu,” Wagner said. “It might be if they were in a high oxalic state or a state where molecules are more prone to linking to the gadolinium, leading to the formation of the nanoparticles. That might be why some individuals have such awful symptoms and this massive disease response, whereas other people are fine.”

The finding points to a possible way to mitigate some of the risks associated with MRI scans, he said.

“I wouldn't take vitamin C if I needed to have an MRI with contrast because of the reactivity of the metal,” Wagner said. “I'm hoping that we're getting closer to some recommendations for helping these individuals.”

The team is now researching ways to identify those who might be at greatest risk from gadolinium contrast agents. In a new study they’re building an international patient registry that will include a collection of blood, urine, fingernail and hair samples, which could provide evidence of gadolinium accumulation in the body.

“We want to get a lot more information to come up with the risk factors that relate to those with symptoms,” he said. “We’re going to ask about what medical conditions you had at the time of exposure, what medications are you on, and we want to include dietary supplements, because that might piece it all together – why some people have symptoms, whereas others seem to be impervious.”

 

Fetal alcohol spectrum disorders in children may be underestimated




University of Gothenburg
[Image: Valdemar Landgren, Sahlgrenska Academy at the University of Gothenburg. Credit: University of Gothenburg]




Out of 206 fourth-grade students, 19 met criteria for fetal alcohol spectrum disorders, according to a pilot study at the University of Gothenburg. The results indicate that birth defects caused by alcohol consumption during pregnancy may be as common in Sweden as in several other European countries.

The study ran at six schools in western Sweden and constituted an add-on to the regular health check-up for all fourth-grade students. The participants underwent a physical examination, review of medical records and psychological tests of memory, attention, and problem-solving ability. Parents and teachers described the children's behavior and school performance, and the mothers were interviewed about their dietary habits and alcohol consumption during pregnancy.

Larger study needed

Of the 206 participants examined, fetal alcohol spectrum disorders (FASD) were found in 19 children. Ten had alcohol-related neurobehavioral disorder, four had partial fetal alcohol syndrome, and five had the most severe variant, fetal alcohol syndrome (FAS). The overall prevalence of FASD in the study group was 5.5 percent, of which 2.4 percent concerned FAS.

The study's lead author is Valdemar Landgren, a psychiatrist and researcher at Sahlgrenska Academy and the Gillberg Neuropsychiatry Centre at the University of Gothenburg:

"Conducting the study in school as an add-on to the regular health check-up proved feasible. Our study is small, so a large-scale national study is needed to obtain a fuller picture. If the results are replicated, it would indicate that Sweden is on a par with many other European countries", he says.

Few diagnosed

There are no prior studies investigating the prevalence of FASD in Sweden. According to nationwide statistics from Sweden's National Board of Health and Welfare, only about 60 children receive such a diagnosis each year.


"Today, these conditions are rarely diagnosed in Swedish healthcare. One reason may be that physicians don't assess for conditions of which they are unaware or believe to be very rare. Empirical knowledge about the actual prevalence is of importance for medical education and diagnostics, and for society to be able to work preventively," says Valdemar Landgren.

 

How can science benefit from AI?


Publication by the University of Bonn warns of misunderstandings in handling predictive algorithms



University of Bonn

[Image: Prof. Dr. Jürgen Bajorath, Life Science Informatics at the University of Bonn. Credit: University of Bonn]




Researchers from chemistry, biology, and medicine are increasingly turning to AI models to develop new hypotheses. However, it is often unclear on what basis the algorithms reach their conclusions and to what extent they can be generalized. A publication by the University of Bonn now warns of misunderstandings in handling artificial intelligence. At the same time, it highlights the conditions under which researchers can most likely have confidence in the models. The study has now been published in the journal Cell Reports Physical Science.

Adaptive machine learning algorithms are incredibly powerful. Nevertheless, they have a disadvantage: How machine learning models arrive at their predictions is often not apparent from the outside.

Suppose you feed an artificial intelligence photos of several thousand cars. If you now present it with a new image, it can usually identify reliably whether the picture also shows a car or not. But why is that? Has it really learned that a car has four wheels, a windshield, and an exhaust? Or is its decision based on criteria that are actually irrelevant – such as the antenna on the roof? If this were the case, it could also classify a radio as a car.
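This failure mode can be made concrete with a toy sketch (a made-up illustration, not from the publication): a naive learner that picks the single feature best correlated with the label will latch onto the antenna if the training data happens to make it a perfect predictor, and will then misclassify a radio.

```python
# Toy illustration of a spurious feature. In this made-up training set,
# "has_antenna" happens to match the "is_car" label perfectly, so a naive
# learner that picks the single best-correlated feature chooses it --
# and then classifies a radio as a car.

training = [
    # (has_wheels, has_windshield, has_antenna), is_car
    ((1, 1, 1), 1),   # sedan
    ((1, 1, 1), 1),   # SUV
    ((1, 0, 1), 1),   # convertible: no windshield, but an antenna
    ((1, 0, 0), 0),   # bicycle
    ((0, 0, 0), 0),   # chair
]

def best_feature(data):
    """Return the index of the feature that best matches the label."""
    n_features = len(data[0][0])
    def accuracy(i):
        return sum(x[i] == y for x, y in data) / len(data)
    return max(range(n_features), key=accuracy)

chosen = best_feature(training)   # index 2: the antenna
radio = (0, 0, 1)                 # no wheels, no windshield, one antenna
# The learner picked the antenna, so the radio is misclassified as a car.
print(chosen, "-> radio classified as car?", bool(radio[chosen]))
```

Here the learner reports perfect training accuracy while relying on a feature that says nothing about what makes a car, which is exactly why explainability checks matter.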

AI models are black boxes

“AI models are black boxes,” highlights Prof. Dr. Jürgen Bajorath. “As a result, one should not blindly trust their results and draw conclusions from them.” The computational chemistry expert heads the AI in Life Sciences department at the Lamarr Institute for Machine Learning and Artificial Intelligence. He is also in charge of the Life Science Informatics program at the Bonn-Aachen International Center for Information Technology (b-it) at the University of Bonn. In the current publication, he investigated the question of when one can most likely rely on the algorithms. And vice versa: When not.

The concept of “explainability” plays an important role in this context. Metaphorically speaking, this refers to efforts within AI research to drill a peephole into the black box. The algorithm should reveal the criteria that it uses as a basis – the four wheels or the antenna. “Opening the black box is currently a central topic in AI research,” says Bajorath. “Some AI models are developed exclusively to make the results of others more comprehensible.”

Explainability, however, is only one aspect – the question of which conclusions might be drawn from the decision-making criteria chosen by a model is equally important. If the algorithm indicates that it has based its decision on the antenna, a human being knows immediately that this feature is poorly suited for identifying cars. Despite this, adaptive models are generally used to identify correlations in large data sets that humans might not even notice. We are then like aliens who do not know what makes a car: An alien would be unable to say whether or not an antenna is a good criterion.

Chemical language models suggest new compounds

“There is another question that we always have to ask ourselves when using AI procedures in science,” stresses Bajorath, who is also a member of the Transdisciplinary Research Area (TRA) “Modelling”: “How interpretable are the results?” Chemical language models are currently a hot topic in chemistry and pharmaceutical research. It is possible, for instance, to feed them with many molecules that have a certain biological activity. Based on these input data, the model then learns and ideally suggests a new molecule that also has this activity but a new structure. This is also referred to as generative modeling. However, the model usually cannot explain why it arrives at this solution. It is often necessary to subsequently apply explainable AI methods.

Nonetheless, Bajorath warns against over-interpreting these explanations, that is, assuming that features the AI considers important indeed cause the desired activity. “Current AI models understand essentially nothing about chemistry,” he says. “They are purely statistical and correlative in nature and pay attention to any distinguishing features, regardless of whether these features might be chemically or biologically relevant or not.” In spite of this, they may even be right in their assessment – so perhaps the suggested molecule has the desired capabilities. The reasons for this, however, can be completely different from what we would expect based on chemical knowledge or intuition. For evaluating potential causality between features driving predictions and outcomes of corresponding natural processes, experiments are typically required: The researchers must synthesize and test the molecule, as well as other molecules with the structural motif that the AI considers important.

Plausibility checks are important

Such tests are time-consuming and expensive. Bajorath thus warns against over-interpreting the AI results in the search for scientifically plausible causal relationships. In his view, a plausibility check based on a sound scientific rationale is of critical importance: Can the feature suggested by explainable AI actually be responsible for the desired chemical or biological property? Is it worth pursuing the AI’s suggestion? Or is it a likely artifact, a randomly identified correlation such as the car antenna, which is not relevant at all for the actual function?

The scientist emphasizes that the use of adaptive algorithms fundamentally has the potential to substantially advance research in many areas of science. Nevertheless, one must be aware of the strengths of these approaches – and particularly of their weaknesses.

Publication: Jürgen Bajorath: From Scientific Theory to Duality of Predictive Artificial Intelligence Models; Cell Reports Physical Science; DOI: 10.1016/j.xcrp.2025.102516, Internet: https://www.sciencedirect.com/science/article/pii/S2666386425001158


[Image: From explaining predictions to capturing causal relationships. Credit: Jürgen Bajorath/University of Bonn]

 

Energy giants back key CCUS breakthrough research



Heriot-Watt University

[Image: The specialist Hydrate, Flow Assurance and Phase Equilibria (HFAPE) research group working in the lab at Heriot-Watt University. Credit: Heriot-Watt University]



Scientists from Heriot-Watt University have secured new funding to investigate the thermodynamic behaviour of typical carbon capture, utilisation, and storage (CCUS) fluids. This research is critical for the safe and efficient processing, transportation, and storage of these fluids.

The two-year project aims to improve thermodynamic models that predict the phase behaviour of CO2-rich mixtures, focusing specifically on volatile organic compounds (VOCs) as the impurities. The project outcomes will be pivotal in establishing optimum operational conditions throughout the CCUS chain, as well as ensuring environmental compliance and secure CO2 storage.

In CCUS systems, VOCs are often found in the captured CO2 stream, primarily originating from the source of the CO2. VOCs include, for example, benzene, toluene, xylene (BTX), aldehydes (formaldehyde, acetaldehyde), and various hydrocarbons depending on the fuel source and capture conditions.  

Jointly funded by TotalEnergies and Equinor, the new research project builds on Heriot-Watt University’s long-standing expertise in CCUS research. Since the institution’s first CCUS related joint industry project (JIP) in 2011, led by Professor Antonin Chapoy, a specialist research group has developed advanced laboratories and cutting-edge expertise in experimental and modelling studies of the thermophysical properties of CCUS fluids. Today, the group collaborates with more than ten major CCUS operators worldwide through consultancy and research projects.

Dr Pezhman Ahmadi, project lead, is from the specialist Hydrate, Flow Assurance and Phase Equilibria (HFAPE) research group at Heriot-Watt University. He emphasised the importance of this research:

"For safety and technical reasons, understanding the thermodynamic behaviour of a fluid is key to its successful processing, transportation, and storage. In CCUS projects, where the working fluid is usually a CO2-rich mixture, the presence of impurities significantly influences the behaviour of the fluid in comparison to a pure CO2 stream. While thermodynamic models for pure CO2 are reliable thanks to abundant experimental data, impure CO2 streams, which are common in industry, pose challenges due to limited data and deficiencies in existing models. This project focuses on VOCs as a critical category of impurities so we can better understand the influence of this type of impurity and address this data gap."

Heriot-Watt’s Professor Antonin Chapoy, project co-lead, has extensive experience in leading CCUS projects for the research group. He added: "Our modelling studies, underpinned by experimental capabilities and expertise, provide precise thermodynamic models that improve the safety, technical and economic aspects of CCUS operations. These models help reduce operational risks, such as hydrate or dry ice formation, and minimise costs while enhancing efficiency in the transportation and storage of CO2-rich fluids. Over the years, our work has supported major CCUS operators in achieving safer and more cost-effective operations."

The group's expertise was recently showcased through its involvement in the Northern Lights project, Norway's pioneering carbon storage initiative that opened in September 2024. The technical contributions made by this group of researchers were critical in ensuring the safe transportation and storage of CO2, with the team providing essential data on fluid behaviour under varying conditions.

Professor Chapoy continues: "Our contributions to CCS projects and our extensive expertise underscore the importance of understanding thermodynamic properties of CCUS fluids for the long-term success of decarbonisation projects. With 14 years of focused research on this topic, our team continues to develop practical solutions to accelerate industry’s net-zero transition. This new project exemplifies our commitment to supporting global decarbonisation efforts. We are grateful for the support of TotalEnergies and Equinor in driving this critical research forward."

Ends

CRIMINAL CRYPTO CAPITALI$M

Mathematicians uncover the hidden patterns behind a $3.5 billion cryptocurrency collapse


The study reveals coordinated attack behind TerraUSD crash


Queen Mary University of London




In a new study published in ACM Transactions on the Web, researchers from Queen Mary University of London have unveiled the intricate mechanisms behind one of the most dramatic collapses in the cryptocurrency world: the downfall of the TerraUSD stablecoin and its associated currency, LUNA. Using advanced mathematical techniques and cutting-edge software, the team has identified suspicious trading patterns that suggest a coordinated attack on the ecosystem, leading to a catastrophic loss of $3.5 billion in value virtually overnight. 

The study, led by Dr Richard Clegg and his team, employs temporal multilayer graph analysis — a sophisticated method for examining complex, interconnected systems over time. This approach allowed the researchers to map the relationships between different cryptocurrencies traded on the Ethereum blockchain, revealing how the TerraUSD stablecoin was destabilised by a series of deliberate, large-scale trades. 

Stablecoins like TerraUSD are designed to maintain a steady value, typically pegged to a fiat currency like the US dollar. However, in May 2022, TerraUSD and its sister currency, LUNA, experienced a catastrophic collapse. Dr Clegg’s research sheds light on how this happened, uncovering evidence of a coordinated attack by traders who were betting against the system, a practice known as "shorting." 

“What we found was extraordinary,” says Dr Clegg. “On the days leading up to the collapse, we observed highly unnatural trading patterns. Instead of the usual spread of transactions across hundreds of traders, we saw a handful of individuals controlling almost the entire market. These patterns are the smoking gun: evidence of a deliberate attempt to destabilise the system.” 

The team’s analysis revealed that on key dates, just five or six traders accounted for nearly all the trading activity, with each controlling almost exactly the same share of the market. This level of coordination is virtually impossible by chance in a normal trading environment and strongly suggests that these individuals were working together to trigger the collapse. 
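The concentration pattern described here can be illustrated with a simple sketch (not the paper's temporal multilayer graph method): the Herfindahl-Hirschman index (HHI) of per-trader volume shares jumps when a handful of traders with near-identical shares dominate. The volumes below are hypothetical.

```python
# Illustrative sketch: flagging unusually concentrated trading with the
# Herfindahl-Hirschman index (HHI). The index is near 1/N when volume is
# evenly spread across N traders and approaches 1.0 as a few dominate.
# The volumes are hypothetical, not data from the study.

def hhi(volumes):
    """Herfindahl-Hirschman index of a list of per-trader volumes."""
    total = sum(volumes)
    shares = [v / total for v in volumes]
    return sum(s * s for s in shares)

# A normal day: volume spread across hundreds of traders.
normal_day = [1.0] * 400
# A suspicious day: five traders with near-identical, dominant shares.
suspicious_day = [200.0, 198.0, 202.0, 199.0, 201.0] + [1.0] * 50

print(f"normal day HHI:     {hhi(normal_day):.4f}")      # 1/400 = 0.0025
print(f"suspicious day HHI: {hhi(suspicious_day):.4f}")  # close to 1/5
```

A regulator-style monitor could compute such an index per day and per trading pair and flag days where it spikes far above its baseline.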

The research not only provides insights into the TerraUSD collapse but also introduces a powerful new tool for analysing cryptocurrency markets. The team’s software, developed in collaboration with Pometry, a spin-out company from Queen Mary University of London, uses graph network analysis to visualise and interpret complex trading data. This tool could prove invaluable for regulators, investors, and researchers seeking to understand and mitigate risks in the volatile world of cryptocurrency. 

“Cryptocurrencies are often seen as the Wild West of finance, with little oversight and even less accountability,” says Dr Clegg. “Our work shows that by applying rigorous mathematical techniques, we can uncover the hidden patterns and behaviours that drive these markets. This isn’t just about understanding what went wrong in the past — it’s about building a safer, more transparent financial system for the future.” 

The implications of this research extend far beyond the world of cryptocurrency. The methods developed by Dr Clegg and his team could be applied to a wide range of complex systems, from financial markets to social networks. For regulatory agencies, this work offers a new way to monitor and safeguard against systemic risks, protecting both individual investors and the broader economy.