Friday, February 06, 2026

 

Ancient rocks reveal evidence of the first continents and crust recycling processes on Earth


New analyses of the planet’s oldest minerals suggest an unexpected diversity of tectonic settings more than 4 billion years ago




University of Wisconsin-Madison




MADISON — Parts of the ancient Earth may have formed continents and recycled crust through subduction far earlier than previously thought.

New research led by scientists at the University of Wisconsin–Madison has uncovered chemical signatures in zircons, the planet’s oldest minerals, that are consistent with subduction and extensive continental crust during the Hadean Eon, more than 4 billion years ago. The findings challenge models that have long depicted Earth’s earliest times as dominated by a rigid, unmoving “stagnant lid” with no continental crust, with potential implications for the timing of the origin of life on the planet.

The study, published Feb. 4 in the journal Nature, is based on chemical analyses of ancient zircons found in the Jack Hills of Western Australia. These sand-sized grains preserve the only direct records of Earth’s first 500 million years and offer rare insight into how the planet’s surface and interior interacted as continents first formed.

The conclusions come from measurements of trace elements within individual zircon grains using WiscSIMS, a powerful instrument housed on the UW–Madison campus that can analyze microscopic objects one-tenth the diameter of a human hair. The UW team developed new procedures to analyze elements that could not be measured previously.

These elements are essentially fingerprints of the environments where the zircons formed, allowing the scientists to distinguish zircons that crystallized from magmas originating in the mantle beneath Earth’s crust from those associated with subduction and continental crust. Because zircons lock in their chemistry when they crystallize and are highly resistant to alteration, they preserve uniquely reliable records of early Earth processes, even after several billion years.

“They’re tiny time capsules and they carry an enormous amount of information,” says John Valley, a professor emeritus of geoscience at UW–Madison who led the research.

Valley says that the chemistry of zircons found in the Jack Hills clearly shows that they originated from a much different source than other Hadean zircons found in South Africa, which carry a chemical signature typical of more primitive rocks originating within the Earth's mantle.

“What we found in the Jack Hills is that most of our zircons don’t look like they came from the mantle,” says Valley. “They look like continental crust. They look like they formed above a subduction zone.”

Together, the two groups of zircons suggest that early Earth was not dominated by a single tectonic style, according to Valley. 

“I think the South Africa data are correct, and our data are correct,” Valley says. “That means the Hadean Earth wasn’t covered by a uniform stagnant lid.”

Importantly, the type of subduction that could have produced the Jack Hills zircons is not necessarily the same as in modern plate tectonics. Valley described a process in which mantle plumes of ultra-hot rock rose, partly melted and pooled at the base of the crust, creating circulation that could draw surface materials downward.

“That is subduction,” he says. “It’s not plate tectonics, but you have surface rocks sinking down into the mantle.”

This matters because subduction carries water-rich surface rocks down to hotter depths, where they can cause melting and form magmas that produce granitic rocks.

“If you have material on the surface, the surface had liquid water in the Hadean,” Valley says. “And when you take that material down, it’s wet and dehydrates. The water causes melting and that forms granites.”

Granites and related rocks are fundamental building blocks of continents. They’re less dense than the common rocks that floor Earth’s oceans, which makes continents buoyant: they stand higher than the surrounding ocean basins, providing stable environments on the Earth’s surface.

“This is evidence for the first continents and mountain ranges,” Valley says.

The results suggest that early Earth was geologically diverse, with different tectonic styles operating simultaneously in different regions.

“We can have both a stagnant-lid-like environment and a subduction-like environment operating at the same time, just in different places,” Valley says.

That complexity could reshape how scientists think about the planet’s first billion years, and the implications extend beyond tectonics. Subduction and continent formation influence when dry land first appeared and how surface environments evolved.

“What everybody really wants to know is, when did life emerge?” Valley says. “This doesn’t answer that question, but it says that we had dry land as a viable environment very early on.”

The oldest accepted microfossils are about 3.5 billion years old, but the Jack Hills zircons push evidence for potentially habitable surface conditions much earlier.

“We propose that there was about 800 million years of Earth history where the surface was habitable, but we don’t have fossil evidence and don’t know when life first emerged on Earth,” Valley says.

As scientists continue to hunt for evidence of what the earliest Earth was like, Valley says the latest results are an example of the power of improving and refining laboratory techniques.

“Our new analytical capabilities opened a window into these amazing samples,” he says. “The Hadean zircons are literally so small you can’t see them without a lens, and yet they tell us about the otherwise unknown story of the earliest Earth.”

This research was supported by the European Research Council under the European Union’s Horizon 2020 research and innovation program (856555) and the National Science Foundation (EAR-2320078, EAR-2136782).

 

Equity, diversity, and inclusion programs in health care institutions



JAMA Network Open



About The Study: In this systematic review and meta-analysis of equity, diversity, and inclusion (EDI) initiatives in health care institutions, programs were associated with increased workforce diversity. These findings support the continued use of EDI initiatives to promote a more inclusive and equitable health care culture.



Corresponding Author: To contact the corresponding author, Manish M. Sood, MD, MSc, email Msood@toh.on.ca.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

(doi:10.1001/jamanetworkopen.2025.55896)

Editor’s Note: Please see the article for additional information, including other authors, author contributions and affiliations, conflict of interest and financial disclosures, and funding and support.

#  #  #

Embed this link to provide your readers free access to the full-text article 

https://jamanetwork.com/journals/jamanetworkopen/fullarticle/10.1001/jamanetworkopen.2025.55896?guestAccessKey=1b34668e-afe8-4888-aa3d-dd05b3b83eff&utm_source=for_the_media&utm_medium=referral&utm_campaign=ftm_links&utm_content=tfl&utm_term=020426

About JAMA Network Open: JAMA Network Open is an online-only open access general medical journal from the JAMA Network. On weekdays, the journal publishes peer-reviewed clinical research and commentary in more than 40 medical and health subject areas. Every article is free online from the day of publication. 

 

In a study, AI model OpenScholar synthesizes scientific research and cites sources as accurately as human experts





University of Washington




Keeping up with the latest research is vital for scientists, but given that millions of scientific papers are published every year, that can prove difficult. Artificial intelligence systems show promise for quickly synthesizing seas of information, but they still tend to make things up, or “hallucinate.” 

For instance, when a team led by researchers at the University of Washington and The Allen Institute for AI, or Ai2, studied a recent OpenAI model, GPT-4o, they found it fabricated 78-90% of its research citations. And general-purpose AI models like ChatGPT often can’t access papers that were published after their training data was collected. 

So the UW and Ai2 research team built OpenScholar, an open-source AI model designed specifically to synthesize current scientific research. The team also created the first large, multi-domain benchmark for evaluating how well models can synthesize and cite scientific research. In tests, OpenScholar cited sources as accurately as human experts, and 16 scientists preferred its responses to those written by subject experts 51% of the time.

The team published its findings Feb. 4 in Nature. The project’s code, data and a demo are publicly available and free to use.

“After we started this work, we put the demo online and quickly, we got a lot of queries, far more than we’d expected,” said senior author Hannaneh Hajishirzi, a UW associate professor in the Paul G. Allen School of Computer Science & Engineering and senior director at Ai2. “When we started looking through the responses we realized our colleagues and other scientists were actively using OpenScholar. It really speaks to the need for this sort of open-source, transparent system that can synthesize research.”

Researchers trained the model and then assembled a datastore of 45 million scientific papers that OpenScholar draws on to ground its answers in established research. They coupled this with a technique called “retrieval-augmented generation,” which lets the model search for new sources, incorporate them and cite them after it’s been trained.
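
For readers unfamiliar with the technique, retrieval-augmented generation boils down to "retrieve relevant passages, then generate an answer grounded in them." The minimal Python sketch below illustrates the idea; embed, search_index and generate are hypothetical stand-ins for an embedding model, a datastore index and a language model, not OpenScholar's actual API or pipeline.

```python
def answer_with_citations(question, embed, search_index, generate, k=5):
    """Retrieve the k most relevant passages, then generate an answer
    that is instructed to cite only the retrieved passages."""
    query_vec = embed(question)                       # encode the question
    passages = search_index(query_vec, top_k=k)       # [(paper_id, text), ...]
    context = "\n\n".join(f"[{pid}] {text}" for pid, text in passages)
    prompt = (
        "Answer the question using only the sources below, citing them "
        f"by their [id].\n\nSources:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```

Because retrieval happens at query time, a system built this way can cite papers published after the underlying model was trained, which is the flexibility the researchers describe.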

“Early on we experimented with using an AI model with Google’s search data, but we found it wasn’t very good on its own,” said lead author Akari Asai, a research scientist at Ai2 who completed this research as a UW doctoral student in the Allen School. “It might cite some research papers that weren’t the most relevant, or cite just one paper, or pull from a blog post randomly. We realized we needed to ground this in scientific papers. We then made the system flexible so that it could incorporate emerging research through search results.”

To test their system, the team created ScholarQABench, a benchmark for evaluating systems on scientific search. They gathered 3,000 queries and 250 long-form answers written by experts in computer science, physics, biomedicine and neuroscience.

“AI is getting better and better at real-world tasks,” Hajishirzi said. “But the big question ultimately is whether we can trust that its answers are correct.”

The team compared OpenScholar against other state-of-the-art AI models, such as OpenAI’s GPT-4o and two models from Meta. ScholarQABench automatically evaluated AI models’ answers on metrics such as their accuracy, writing quality and relevance. 
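
As a rough illustration of how such automated scoring can work, the sketch below computes a simple citation-accuracy metric: the fraction of an answer's bracketed citations that point at real papers in a known corpus. The id format and scoring rule here are illustrative assumptions, not ScholarQABench's actual implementation.

```python
import re

def citation_accuracy(answer, known_paper_ids):
    """Fraction of bracketed citations in the answer that refer to
    papers actually present in the corpus (1.0 if nothing is cited)."""
    cited = re.findall(r"\[([^\]]+)\]", answer)
    if not cited:
        return 1.0
    return sum(1 for c in cited if c in known_paper_ids) / len(cited)

# One real citation, one fabricated one -> accuracy 0.5.
print(citation_accuracy("LLMs hallucinate [doi:10.1/abc] [fake2024].",
                        {"doi:10.1/abc"}))
```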

OpenScholar outperformed all the systems it was tested against. The team had 16 scientists review answers from the models and compare them with human-written responses. The scientists preferred OpenScholar’s answers to human answers 51% of the time, but when the team combined OpenScholar’s citation methods and pipelines with GPT-4o (a much bigger model), the scientists preferred the AI-written answers to human answers 70% of the time. They picked answers from GPT-4o on its own only 32% of the time.

“Scientists see so many papers coming out every day that it’s impossible to keep up,” Asai said. “But the existing AI systems weren’t designed for scientists’ specific needs. We’ve already seen a lot of scientists using OpenScholar and, because it’s open-source, others are building on this research and already improving on our results. We’re working on a follow-up model, DR Tulu, which builds on OpenScholar’s findings and performs multi-step search and information gathering to produce more comprehensive responses.”

Other co-authors include Jacqueline He, Rulin Shao and Weijia Shi, all UW doctoral students in the Allen School; Dan Weld, a UW professor emeritus in the Allen School and general manager and chief scientist at Ai2; Varsha Kishore, a UW postdoc in the Allen School and postdoc at Ai2; Luke Zettlemoyer, a UW professor in the Allen School; Pang Wei Koh, a UW assistant professor in the Allen School; Amanpreet Singh, Joseph Chee Chang, Kyle Lo, Luca Soldaini, Sergey Feldman, Mike D’Arcy, David Wadden, Matt Latzke, Jenna Sparks and Jena D. Hwang of Ai2; Wen-tau Yih of Meta; Minyang Tian, Shengyan Liu, Hao Tong and Bohao Wu of the University of Illinois Urbana-Champaign; Pan Ji of the University of North Carolina; Yanyu Xiong of Stanford University; and Graham Neubig of Carnegie Mellon University.

For more information, contact Asai at akaria@allenai.org and Hajishirzi at hannaneh@cs.washington.edu.

 

'Discovery learning' AI tool predicts battery cycle life with just a few days' data



A 'learner,' 'interpreter' and 'oracle' work together with minimal experiments to draw parallels between historical data and new battery designs




University of Michigan





[Image: illustrations of the discovery learning process]

An agentic AI tool for battery researchers harnesses data from previous battery designs to predict the cycle life of new battery concepts. With information from just 50 cycles, the tool—developed at University of Michigan Engineering—can predict how many charge-discharge cycles the battery can undergo before its capacity drops below 90 percent of its design capacity.
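
In code, that cycle-life definition reduces to finding the first cycle at which measured capacity crosses the 90 percent threshold. Below is a minimal Python sketch of that definition, using made-up capacity data rather than anything from the study:

```python
def cycle_life(capacities, design_capacity, threshold=0.90):
    """Return the first cycle (1-indexed) at which capacity falls below
    the threshold fraction of design capacity, or None if it never does."""
    for cycle, cap in enumerate(capacities, start=1):
        if cap < threshold * design_capacity:
            return cycle
    return None

# Illustrative cell: 5.0 Ah design capacity, fading 3 mAh per cycle.
caps = [5.0 - 0.003 * n for n in range(300)]
print(cycle_life(caps, design_capacity=5.0))  # -> 168
```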

 

This could save months to years of testing, depending on the conditions of cycling experiments, as well as substantial electrical power during battery prototyping and testing. The team estimates that the cycle lives of new battery designs could be predicted with just 5% of the energy and 2% of the time required by conventional testing.

 

"When we learn from the historical battery designs, we leverage physics-based features to construct a generalizable mapping between early-stage tests and cycle life," said Ziyou Song, U-M assistant professor of electrical and computer engineering and corresponding author of the study in Nature. "We can minimize experimental efforts and achieve accurate prediction performance for new battery designs."

 

The study was funded by the battery company Farasis Energy USA in California, which also provided battery cells and data from its design and testing to assess how well the model—trained only on free, public data—performed.

 

The tool is inspired by a teaching approach known as discovery learning, or learning by doing. A student learning in this way has a problem to solve and resources to help discover the solution, while drawing on their own experiences and prior knowledge. Over the course of solving many problems, the student no longer needs the resources to solve similar ones—they have internalized the knowledge and skills. 

 

"Discovery learning is a general machine-learning approach that may be extended to other scientific and engineering domains," said Jiawei Zhang, U-M doctoral candidate in electrical and computer engineering and the first author of this study, who had the initial inspiration to design a team of AI agents that could simulate this mode of learning.

 

How the AI discovery learning tool works

 

The team designated an AI "learner" that would predict the cycle life for a given battery design and cycling conditions, such as temperature and current. The learner chooses a few battery candidates that would fill gaps in its knowledge, to be built and run for about 50 cycles. The results of those experiments flow to an "interpreter," which accesses historical data and runs calculations with a physics-based battery simulator. The "oracle" then makes cycle life predictions for the experimental batteries based on the historical data and calculations provided. 

 

Finally, the learner combines the new information with previous predictions to estimate the cycle life of the new battery design. Even with these experiments, the discovery learning system provides huge time and energy savings, with the potential to improve further as the learner accumulates enough knowledge to make predictions without running the discovery loop.
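
Put schematically, the loop described above might look like the following Python sketch. Every name here is an illustrative stand-in rather than the authors' code; the real system couples these agents to laboratory experiments and a physics-based battery simulator.

```python
def discovery_learning_loop(learner, interpreter, oracle,
                            candidate_designs, run_short_test, n_rounds=3):
    """One hypothetical rendering of the learner/interpreter/oracle cycle."""
    for _ in range(n_rounds):
        # Learner picks the designs that would best fill gaps in its knowledge.
        picks = learner.select_informative(candidate_designs)
        # Each pick is built and cycled for about 50 charge-discharge cycles.
        early_data = [run_short_test(design, cycles=50) for design in picks]
        # Interpreter turns early-cycle data into physics-based features,
        # drawing on historical data and a battery simulator.
        features = [interpreter.extract_features(d) for d in early_data]
        # Oracle predicts cycle life for the tested designs from those features.
        labels = [oracle.predict_cycle_life(f) for f in features]
        # Learner folds the new (feature, label) pairs back into its model.
        learner.update(features, labels)
    return learner
```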

 

Next-gen lithium-ion batteries are very different from previous iterations—in chemistry, structure and materials—but the team argues that there are parallels among them that may help predict how new designs will perform. Rather than using simple statistical features from current and voltage signals, the interpreter leverages underlying physical properties to establish commonalities among different batteries.

 

With this information in hand, the oracle considers the battery in two ways: its internal characteristics—information from the interpreter about the physics and chemistry of the cell—and its operating conditions. For instance, at higher temperatures, a particular chemical change may dominate how the battery is likely to degrade, but that mechanism is less important at lower temperatures.

 

The team tested their model with data and pouch cells from Farasis Energy USA. After training on a data set that included only cylindrical cells, similar to the familiar AA battery, the model could predict the performance of these larger pouch cells. While full tests run to 1,000 cycles and can take a few months to years, 50-cycle tests take only a few days to weeks, according to the team's estimates. Testing required fewer cells, as well as fewer cycles, resulting in energy savings of about 95%.

 

Within battery technology, the team intends to expand the approach to other areas of performance, such as safety and charging speed. However, as discovery learning is a new scientific machine-learning approach, the team believes that others could build similar predictive tools or develop new approaches to optimization. They hope it could speed development in many disciplines bottlenecked by the need for expensive experiments, most immediately in chemistry and material design.

 

Researchers from the National University of Singapore also contributed to the study.

 

Study: Discovery learning predicts battery cycle life from minimal experiments (DOI: 10.1038/s41586-025-09951-7)