Friday, March 13, 2026

 

Why do lithium-ion batteries fail? Scientists find clues in microscopic metal 'thorns'





New Jersey Institute of Technology
Image caption: Fractured dendrite. Brittle, microscopic structures called dendrites form in lithium-ion batteries and can disrupt battery performance. Unlike bulk lithium, which is pliant and supple, dendrites fracture under stress.

Credit: Courtesy of the Lou Group/Rice University




For the first time, scientists have observed how tiny metal "thorns" called dendrites sprout inside lithium-ion batteries, which can cause the batteries to short-circuit. Their findings, published Mar. 12 in the journal Science, shed light on previously unknown mechanical properties of lithium dendrites as they grow.

Scientists have long studied lithium dendrites, but did not fully understand how these structures behave inside batteries. Dendrites form at the nanoscale; their growth is challenging to observe in the closed system of a working battery, but has been linked to battery decline and failure.

The new study, an international collaboration between researchers from universities in the U.S. and Singapore, combined experiments and simulations to provide a first glimpse of how dendrites crystallize, says co-lead author Xing Liu, an assistant professor of mechanical and industrial engineering at New Jersey Institute of Technology and director of NJIT's Computational Mechanics and Physics Lab.

"This work reflects a close collaboration between experimental and computational mechanics," and could help improve battery safety, he says. 

Co-lead author Qing Ai, a former research scientist at Rice University, adds: "Despite decades of study, the fundamental nanomechanical properties of lithium dendrites remained a mystery until now."

Customized platforms

About 100 times thinner than the width of a human hair, lithium dendrites (from the Greek word for "tree") grow from anodes — the negative terminals in lithium-ion batteries. Dendrites' branches can penetrate a lithium cell's electrolyte; if dendrites extend from the negatively charged anode to the positively charged cathode, they can short out the battery.

"Lithium dendrites are widely recognized as one of the biggest obstacles to the commercialization of lithium-metal batteries," Liu says. "During battery operation, lithium dendrites can form, break, and become electrically isolated from the lithium metal anode, creating what is known as 'dead lithium.' This process leads to a gradual loss of battery capacity over time. In addition, dendrites can penetrate the separator and create an internal short circuit between the anode and cathode. Both capacity loss and short-circuit risks associated with dendrites are commonly observed in lab studies."

What's more, lithium dendrites are near-impossible to remove from a battery once they form.

"At present, there is no practical method to 'clear' dendrites from a working battery cell," Liu adds.

For the new study, researchers at Rice University and collaborators at Georgia Institute of Technology, the University of Houston and the Nanyang Technological University in Singapore harvested dendrites from working batteries to test their mechanical strength. 

"To enable the quantitative study of lithium dendrites, we developed customized sample preparation and mechanical characterization platforms for such delicate work," says Boyu Zhang, a Rice doctoral alum and co-lead author on the study. 

Co-corresponding author Jun Lou, Rice's Karl F. Hasselmann Professor of Materials Science and Nanoengineering, led a team at the Nanomaterials, Nanomechanics and Nanodevices lab in directly probing the mechanical behavior of dendrites as they formed in real batteries. Ai and Zhang, both former members of Lou's lab, performed the extremely delicate experiments with support from study co-corresponding author Hua Guo and co-author Wenhua Guo of the Rice University Shared Equipment Authority.

To conduct the experiments, they constructed air-tight platforms for preparing and studying samples, as lithium is highly reactive and undergoes chemical and structural changes when exposed to even small quantities of air. High-resolution electron microscopy then revealed how individual dendrites deform in response to controlled stresses. 

"Like dry spaghetti"

Lithium in bulk is supple and squishy; lithium dendrites were therefore expected to be similarly pliant. However, the experiments showed otherwise. The University of Houston team, led by co-corresponding author Yan Yao, a professor in the Department of Electrical and Computer Engineering, observed dendrites breaking in real time during battery operation, providing evidence for dendrite brittleness in both liquid and solid electrolyte systems.

"Lithium dendrites have long been assumed to be soft and ductile, like Play-Doh," Liu says. "But our observations suggest that they may instead be strong and brittle — snapping more like dry spaghetti."

Teams at NJIT and Georgia Tech then contributed modeling and theoretical analysis of data from the observations. 

"We conducted scale-bridging simulations to explain why lithium dendrites behave differently from previously thought," Liu explains.

They found that as dendrites form in a battery cell, a thin layer of solid electrolyte interphase, or SEI, encases them. The SEI coating makes dendrites rigid and needlelike: capable of piercing battery cells' separators and electrolytes, but also prone to snapping under stress. The resulting fragments accumulate in the cell as dead lithium, contributing to battery failure.

"Understanding the underlying physics provides new insights into how to make dendrites less prone to brittle fracture — for example, by using lithium alloy anodes," Liu explains. For researchers who study computational mechanics, mechanisms such as those observed in the study — how structures deform and what makes them shatter and fail — are like musical notes which can be incorporated into a "symphony" of high-performance materials and high-energy storage systems.

"The strengthening mechanism we identified in lithium dendrites adds a new note to this composition," Liu says.


Close-up view of the top of the sample transfer box (top door open), showing that the lithium dendrite was transferred using a micromanipulator tip (a sharp silver needle) from the brown copper transmission electron microscopy grids to the Rice micromechanical devices (silver blocks), ready for subsequent testing and characterization.

Credit: Photo courtesy of the Lou Group/Rice University

 

AI is homogenizing human expression and thought, computer scientists and psychologists say





Cell Press





AI chatbots are standardizing how people speak, write, and think. If this homogenization continues unchecked, it risks reducing humanity’s collective wisdom and ability to adapt, computer scientists and psychologists argue in an opinion paper publishing March 11 in the Cell Press journal Trends in Cognitive Sciences. They say that AI developers should incorporate more real-world diversity into large language model (LLM) training sets, not only to help preserve human cognitive diversity, but also to improve chatbots’ reasoning abilities.  

“Individuals differ in how they write, reason, and view the world,” says first author and computer scientist Zhivar Sourati of the University of Southern California. “When these differences are mediated by the same LLMs, their distinct linguistic style, perspective, and reasoning strategies become homogenized, producing standardized expressions and thoughts across users.” 

Within groups and societies, cognitive diversity bolsters creativity and problem solving, say the researchers. However, cognitive diversity is shrinking worldwide as billions of people are using the same handful of AI chatbots for an increasing number of tasks, they say. When people use chatbots to help them polish their writing, for example, the writing ends up losing its stylistic individuality, and people feel less creative ownership over what they produce. 

“The concern is not just that LLMs shape how people write or speak, but that they subtly redefine what counts as credible speech, correct perspective, or even good reasoning,” says Sourati. 

The team points to multiple studies showing that LLM outputs are less varied than human-generated writing and that LLM outputs tend to reflect the language, values, and reasoning styles of Western, educated, industrialized, rich, and democratic societies. 
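
The reduced variety the cited studies describe can be made concrete with a toy measure. The sketch below (not from the paper; the sample sentences and the choice of metric are illustrative assumptions) uses type-token ratio, a crude proxy for lexical variety: repetitive, standardized text scores lower than varied text.

```python
# Toy illustration (not from the study): type-token ratio (TTR) is one
# crude proxy for lexical variety. Homogenized text reuses the same
# words, so its TTR is lower than that of more varied writing.

def type_token_ratio(texts):
    """Share of unique words across a set of texts (0..1; higher = more varied)."""
    words = [w.lower() for t in texts for w in t.split()]
    return len(set(words)) / len(words) if words else 0.0

# Hypothetical samples: two distinct human phrasings vs. two identical
# model-style outputs.
human_texts = [
    "The quick brown fox vaults the lazy hound",
    "A russet vixen leaps across a drowsy mutt",
]
llm_texts = [
    "The quick brown fox jumps over the lazy dog",
    "The quick brown fox jumps over the lazy dog",
]

print(type_token_ratio(human_texts))  # → 0.875 (14 unique / 16 words)
print(type_token_ratio(llm_texts))    # → ~0.444 (8 unique / 18 words)
```

Real homogenization studies use far richer measures (embedding dispersion, syntactic diversity), but the direction of the comparison is the same.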

“Because LLMs are trained to capture and reproduce statistical regularities in their training data, which often overrepresent dominant languages and ideologies, their outputs often mirror a narrow and skewed slice of human experience,” says Sourati. 

Though studies show that individuals often generate more ideas, in greater detail, when they use LLMs, groups of people produce fewer and less creative ideas when using LLMs than when they simply pool their collective abilities, note the researchers. 

“Even if people are not the first-hand users of LLMs, LLMs are still going to affect them indirectly,” says Sourati. “If a lot of people around me are thinking and speaking in a certain way, and I do things differently, I would feel a pressure to align with them, because it would seem like a more credible or socially acceptable way of expressing my ideas.”  

Beyond language, studies have shown that after interacting with biased LLMs, people’s opinions become more similar to the LLM that they used. LLMs also favor linear modes of reasoning such as “chain-of-thought reasoning,” which requires models to show step-by-step reasoning. This emphasis reduces the use of intuitive or abstract reasoning styles, which are sometimes more efficient than linear reasoning, the researchers say. They also note that LLMs can alter people’s expectations, which can subtly change the direction of a person’s work. 

“Rather than actively steering generation, users often defer to model-suggested continuations, selecting options that seem ‘good enough’ instead of crafting their own, which gradually shifts agency from the user to the model,” says Sourati. 

The researchers say that AI developers should intentionally incorporate diversity in language, perspectives, and reasoning into their models. They emphasize that this diversity should be grounded in the diversity that exists within humans globally, rather than introducing random variation.  

“If LLMs had more diverse ways of approaching ideas and problems, they would better support the collective intelligence and problem-solving capabilities of our societies,” says Sourati. “We need to diversify the AI models themselves while also adjusting how we interact with them, especially given their widespread use across tasks and contexts, to protect the cognitive diversity and ideation potential of future generations.” 

### 

This research was supported by funding from the Air Force Office of Scientific Research. 

Trends in Cognitive Sciences, Sourati et al., “The homogenizing effect of large language models on human expression and thought” https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(26)00003-3

Trends in Cognitive Sciences (@TrendsCognSci), published by Cell Press, is a monthly review journal that brings together research in psychology, artificial intelligence, linguistics, philosophy, computer science, and neuroscience. It provides a platform for the interaction of these disciplines and the evolution of cognitive science as an independent field of study. Visit: http://www.cell.com/trends/cognitive-sciences. To receive Cell Press media alerts, please contact press@cell.com

Researchers develop AI tool to predict patients at risk of intimate partner violence


NIH-funded, automated clinical decision support could facilitate timely interventions for at-risk patients years before they might otherwise seek help



NIH/Office of the Director





 

A team of researchers funded by the National Institutes of Health (NIH) has developed an artificial intelligence (AI) tool that provides decision support to clinicians by predicting whether patients are at risk of intimate partner violence (IPV). Using data routinely collected during medical visits, the team trained a machine-learning model, a type of AI, that was highly accurate in detecting IPV among patients in a study.  

 

IPV refers to abuse from current or former partners that results in serious effects such as potentially life-threatening injuries, chronic pain and mental health disorders. It affects millions of people in the United States — both men and women — at some point in their lives. However, many cases go undetected, because patients can be hesitant to disclose abusive relationships due to safety concerns, fear and stigma. 

 

In their study, the team, led by researchers from Harvard Medical School in Boston, introduced three AI models for IPV detection in healthcare settings and compared their predictive performance.  

 

“This clinical decision support tool could make a significant impact on prediction and prevention of intimate partner violence,” said Qi Duan, Ph.D., director of the Division of Health Informatics Technologies at NIH’s National Institute of Biomedical Imaging and Bioengineering (NIBIB). “Given the prevalence of cases, the tool could be a game-changing asset to public health.” 

 

Many cases of IPV go unrecognized, leading to missed opportunities for timely intervention, according to the study authors. They report that current screening tools capture only a fraction of cases, while clinical and imaging records provide valuable information in detecting IPV risk. Notably, radiologists have an advantage in recognizing the signs of IPV, including the frequency of certain patterns of physical trauma.  

 

The researchers used several years of hospital data from nearly 850 affected female patients and 5,200 unaffected age- and demographics-matched control patients. Because the collection of relevant clinical data varies across healthcare settings, the team designed two distinct AI models, one trained on structured patient data, in table form, and another trained on unstructured patient data from medical notes, including radiology reports. Further, they developed a multimodal model that is a fusion of both structured and unstructured data.  

 

All the models achieved high performance in the study, but the multimodal fusion model outperformed the models trained on structured or unstructured data alone, making correct predictions 88% of the time. Both the tabular model and the fusion model could detect IPV risk, on average, more than three years before patients enrolled at hospital-based domestic abuse intervention centers. While the tabular model achieved slightly earlier recognition of IPV risk, the fusion model detected more IPV cases in advance.   

 

The fusion model also achieved more stable performance than relying on either modality alone. The scientists explained that the different modalities are processed separately and merged only at the prediction stage. This modular design is particularly relevant in healthcare, they found, where data availability and the recording of unstructured data vary across hospitals.  
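
The "merged only at the prediction stage" design described above is commonly called late fusion. The sketch below is a hedged toy illustration, not the study's implementation: the feature names, keyword rules, and fusion weights are all hypothetical stand-ins for the trained tabular and text models.

```python
# Toy late-fusion sketch (hypothetical; not the study's code). Each
# modality is scored by its own stand-in "model"; scores are merged
# only at the end, so either branch can run alone if the other
# modality is unavailable at a given hospital.

def tabular_model(features):
    # Stand-in for a classifier trained on structured (table) data:
    # a toy rule over hypothetical features, returning a risk in [0, 1].
    score = 0.0
    if features["prior_injury_visits"] >= 3:
        score += 0.5
    if features["missed_appointments"] >= 2:
        score += 0.3
    return min(score, 1.0)

def text_model(note):
    # Stand-in for an NLP model over clinical notes / radiology reports:
    # a toy keyword count (a real system would use a trained classifier).
    flags = ["defensive injury", "inconsistent history", "old fracture"]
    hits = sum(1 for flag in flags if flag in note.lower())
    return min(0.4 * hits, 1.0)

def fused_risk(features, note, w_tab=0.5, w_text=0.5):
    # Late fusion: modalities scored separately, merged only here.
    return w_tab * tabular_model(features) + w_text * text_model(note)

patient = {"prior_injury_visits": 4, "missed_appointments": 1}
note = "Imaging shows an old fracture inconsistent with the stated injury."
print(fused_risk(patient, note))  # → 0.45 (0.5*0.5 tabular + 0.5*0.4 text)
```

Because fusion happens at the score level rather than the feature level, swapping in a different text model (or dropping it entirely) requires no change to the tabular branch — the property the researchers highlight for settings with uneven data collection.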

 

The researchers emphasized that the use of AI tools such as their machine learning models could assist healthcare providers in having timely conversations with patients about IPV and connecting those patients with appropriate support resources. Such AI tools are not intended for making definitive diagnoses.  

 

“For decades, our healthcare system has depended largely on patient self-disclosure to identify intimate partner violence, leaving many cases unrecognized and unsupported,” said Bharti Khurana, M.D., senior author of the study and an emergency radiologist at Mass General Brigham and associate professor of radiology at Harvard Medical School. “Our work represents a fundamental shift from reactive disclosure to proactive risk recognition within routine clinical care. By analyzing patterns already present in healthcare data, this approach supports healthcare clinicians in initiating earlier, safer and more informed conversations with patients.” 

 

According to the researchers, when used in a patient-centered manner, this tool can serve as a key component of a proactive approach to IPV intervention, enabling timely and effective support and ultimately leading to improved long-term health outcomes for at-risk patients. The team developed guidance at the project website to help clinicians thoughtfully approach conversations with patients.  

 

“The goal is never to force disclosure, but to help clinicians communicate with patients in a supportive way and to connect them with resources and support,” Khurana said.  

 

The research team plans to use AI models to develop a decision-support tool embedded in electronic medical record systems to provide real-time IPV risk evaluations in clinical settings.  

 

For more about IPV: About Intimate Partner Violence | Intimate Partner Violence Prevention | CDC 

 

For more about Automated IPV Risk Support: https://bhartikhurana.bwh.harvard.edu/airs/  

 

This research was co-funded by NIBIB grant R01EB032384 and the NIH Office of the Director. 

 

About the National Institute of Biomedical Imaging and Bioengineering (NIBIB): NIBIB’s mission is to improve health by leading the development and accelerating the application of biomedical technologies. The Institute is committed to integrating the physical and engineering sciences with the life sciences to advance basic research and medical care. NIBIB supports emerging technology research and development within its internal laboratories and through grants, collaborations, and training. More information is available at the NIBIB website.   

 

About the National Institutes of Health (NIH): NIH, the nation's medical research agency, includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. NIH is the primary federal agency conducting and supporting basic, clinical, and translational medical research, and is investigating the causes, treatments, and cures for both common and rare diseases. For more information about NIH and its programs, visit www.nih.gov

 

NIH…Turning Discovery Into Health® 

 

Reference: Gu J, Villalobos Carballo K, Ma Y, Bertsimas D, and Khurana B. Leveraging multimodal machine learning for accurate risk identification of intimate partner violence. npj Women’s Health. 2026. DOI: 10.1038/s44294-025-00126-3