Thursday, September 25, 2025

Rice anthropologist among first to use AI to uncover new clues that early humans were prey, not predators




Rice University
Image: Fossil evidence showing leopard bite marks embedded in a hominin skull. Credit: Manuel Domínguez-Rodrigo


Were early humans hunters — or hunted?

For decades, researchers believed that Homo habilis — the earliest known species in our genus — marked the moment humans rose from prey to predators. They were thought to be the first stone tool users and among the earliest meat eaters and hunters based on evidence from early archaeological sites.

But fossils of another early human species — African Homo erectus — show they lived alongside H. habilis about 2 million years ago. That raised a new mystery: Which of these two species was actually making tools and eating the meat of hunted animals? Most anthropologists long suspected H. habilis was responsible, which would have placed them in a dominant predatory role. 

New findings from a team led by Rice University anthropologist Manuel Domínguez-Rodrigo challenge that view, revealing that these early humans were still preyed upon by carnivores, likely leopards. The research was conducted through the Institute of Evolution in Africa (IDEA), a partnership between Rice and the Archaeological and Paleontological Museum of Madrid that Domínguez-Rodrigo co-directs with Enrique Baquedano. The work is published in the Annals of the New York Academy of Sciences.

“We discovered that these very early humans were eaten by other carnivores instead of mastering the landscape at that time,” Domínguez-Rodrigo said.

The breakthrough was made possible by applying artificial intelligence (AI) to fossil analysis, giving researchers insights they could not have reached with traditional methods alone. Domínguez-Rodrigo is among the first anthropologists to use AI for taxon-specific analysis of bone surface damage — training computer vision models to recognize the microscopic tooth mark patterns left by different predators.

“Human experts have been good at finding modifications on prehistoric bones,” he said. “But there were too many carnivores at that time. AI has opened new doors of understanding.”

His team trained deep learning models to distinguish bone damage left by leopards, lions, hyenas, crocodiles and wolves. When the models analyzed marks on H. habilis fossils from Olduvai Gorge in Tanzania, they consistently identified leopard bite marks with high confidence.
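The taxon-specific attribution described here can be pictured as a two-stage pipeline: a trained image classifier scores each tooth mark, and the per-mark scores are aggregated into a consensus attribution with a confidence value. The sketch below shows only the aggregation stage, assuming hypothetical softmax outputs from such a model; the predator list matches the article, but the probabilities and the threshold are invented for illustration.

```python
# Hypothetical sketch: turning per-mark classifier outputs into a
# consensus predator attribution. The trained CNN itself is assumed;
# each mark is represented here by its softmax probability vector.

PREDATORS = ["leopard", "lion", "hyena", "crocodile", "wolf"]

def consensus_attribution(mark_probs, threshold=0.8):
    """Average per-mark class probabilities and report the top predator.

    mark_probs: one probability vector per tooth mark, ordered like
    PREDATORS. Returns (predator, confidence), or (None, confidence)
    when the top class falls below the confidence threshold.
    """
    n = len(mark_probs)
    mean = [sum(p[i] for p in mark_probs) / n for i in range(len(PREDATORS))]
    best = max(range(len(PREDATORS)), key=lambda i: mean[i])
    conf = mean[best]
    return (PREDATORS[best] if conf >= threshold else None), conf

# Example: three marks on one fossil, each scored strongly as leopard.
marks = [
    [0.90, 0.04, 0.03, 0.02, 0.01],
    [0.85, 0.06, 0.05, 0.02, 0.02],
    [0.92, 0.03, 0.02, 0.02, 0.01],
]
print(consensus_attribution(marks))  # -> ('leopard', ~0.89)
```

Averaging across marks before thresholding is one simple way to express the article's "consistently identified ... with high confidence"; the real study's aggregation may differ.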

“AI is a game changer,” Domínguez-Rodrigo said. “It’s pushing methods that have been stable for 40 years beyond what we imagined. For the first time, we can pinpoint not just that these humans were eaten but by whom.”

The finding challenges a long-standing idea about when and what type of humans began to dominate their environment, showing that even as their brains were beginning to grow, they were still vulnerable.

“The beginning of the human brain doesn’t mean we mastered everything immediately,” Domínguez-Rodrigo said. “This is a more complex story. These early humans, these Homo habilis, were not the ones responsible for that transformation.”

He said it’s a reminder that human evolution wasn’t a single leap from prey to predator but a long, gradual climb and that H. habilis may not have been the turning point researchers once believed.

Domínguez-Rodrigo added that the methods developed for this study could unlock discoveries across anthropology, allowing researchers to analyze other early human fossils in new ways. The work is part of a growing collaboration between Rice and IDEA, where his team is based.

“This is a pioneer center in the application of artificial intelligence to the past,” he said. “It’s one of the first places using AI for paleontological and anthropological research.”

Domínguez-Rodrigo said he hopes this discovery is just the beginning. By applying AI to other fossils, he believes researchers can map when humans truly rose from prey to predator and uncover new chapters in our evolutionary story that have long been hidden.

“It’s extremely stimulating to be the first one to see something for the first time. When you uncover sites that have been hidden from the human eye for more than 2 million years, you’re contributing to how we reconstruct who we are. It’s a privilege and very encouraging.”

The study was co-authored by Marina Vegara Riquelme and Enrique Baquedano, and was supported by the Spanish Ministry of Science and Innovation, the Spanish Ministry of Universities and the Spanish Ministry of Culture.

AI system learns from many types of scientific information and runs experiments to discover new materials



The new “CRESt” platform could help find solutions to real-world energy problems that have plagued the materials science and engineering community for decades.




Massachusetts Institute of Technology





Machine-learning models can speed up the discovery of new materials by making predictions and suggesting experiments. But most models today only consider a few specific types of data or variables. Compare that with human scientists, who work in a collaborative environment and consider experimental results, the broader scientific literature, imaging and structural analysis, personal experience or intuition, and input from colleagues and peer reviewers.

Now, MIT researchers have developed a method for optimizing materials recipes and planning experiments that incorporates information from diverse sources like insights from the literature, chemical compositions, microstructural images, and more. The approach is part of a new platform, named Copilot for Real-world Experimental Scientists (CRESt), that also uses robotic equipment for high-throughput materials testing, the results of which are fed back into large multimodal models to further optimize materials recipes.

Human researchers can converse with the system in natural language, with no coding required, and the system makes its own observations and hypotheses along the way. Cameras and visual language models also allow the system to monitor experiments, detect issues, and suggest corrections.

“In the field of AI for science, the key is designing new experiments,” says Ju Li, School of Engineering Carl Richard Soderberg Professor of Power Engineering. “We use multimodal feedback — for example information from previous literature on how palladium behaved in fuel cells at this temperature, and human feedback — to complement experimental data and design new experiments. We also use robots to synthesize and characterize the material’s structure and to test performance.”

The system is described in a paper published in Nature. The researchers used CRESt to explore more than 900 chemistries and conduct 3,500 electrochemical tests, leading to the discovery of a catalyst material that delivered record power density in a fuel cell that runs on formate salt to produce electricity.

Joining Li on the paper as first authors are PhD student Zhen Zhang, Zhichu Ren PhD ’24, PhD student Chia-Wei Hsu, and postdoc Weibin Chen. Their coauthors are MIT Assistant Professor Iwnetim Abate; Associate Professor Pulkit Agrawal; JR East Professor of Engineering Yang Shao-Horn; MIT.nano researcher Aubrey Penn; Zhang-Wei Hong PhD ’25; Hongbin Xu PhD ’25; Daniel Zheng PhD ’25; MIT graduate students Shuhan Miao and Hugh Smith; MIT postdocs Yimeng Huang, Weiyin Chen, Yungsheng Tian, Yifan Gao, and Yaoshen Niu; former MIT postdoc Sipei Li; and collaborators including Chi-Feng Lee, Yu-Cheng Shao, Hsiao-Tsu Wang, and Ying-Rui Lu.

A smarter system

Materials science experiments can be time-consuming and expensive. They require researchers to carefully design workflows, make new materials, and run a series of tests and analyses to understand what happened. Those results are then used to decide how to improve the material.

To improve the process, some researchers have turned to a machine-learning strategy known as active learning to make efficient use of previous experimental data points and explore or exploit those data. When paired with a statistical technique known as Bayesian optimization (BO), active learning has helped researchers identify new materials for things like batteries and advanced semiconductors.

“Bayesian optimization is like Netflix recommending the next movie to watch based on your viewing history, except instead it recommends the next experiment to do,” Li explains. “But basic Bayesian optimization is too simplistic. It uses a boxed-in design space, so if I say I’m going to use platinum, palladium, and iron, it only changes the ratio of those elements in this small space. But real materials have a lot more dependencies, and BO often gets lost.”
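As a toy illustration of the "boxed-in" Bayesian optimization Li describes, the sketch below runs one step of BO over a single mixing ratio in [0, 1]: a minimal Gaussian-process surrogate fits three past "experiments," and an upper-confidence-bound rule picks the next ratio to try. Everything here, including the kernel, length scale, and objective values, is invented for illustration and is far simpler than CRESt's actual models.

```python
import math

# Minimal Bayesian-optimization sketch: a tiny Gaussian-process surrogate
# with an RBF kernel scores a grid of candidate mixing ratios by upper
# confidence bound (UCB). Observations are synthetic stand-ins for lab data.

def rbf(a, b, length=0.2):
    return math.exp(-((a - b) ** 2) / (2 * length ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(xs, ys, x, noise=1e-6):
    """Posterior mean and variance of a GP at point x given data (xs, ys)."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)                       # K^-1 y
    k_star = [rbf(x, a) for a in xs]
    mean = sum(k * w for k, w in zip(k_star, alpha))
    v = solve(K, k_star)                       # K^-1 k*
    var = max(rbf(x, x) - sum(k * w for k, w in zip(k_star, v)), 0.0)
    return mean, var

def ucb(x, xs, ys, beta=2.0):
    m, v = gp_posterior(xs, ys, x)
    return m + beta * math.sqrt(v)

def next_experiment(xs, ys):
    grid = [i / 100 for i in range(101)]       # the "boxed-in" design space
    return max(grid, key=lambda x: ucb(x, xs, ys))

# Three past "experiments": performance appears to peak near ratio 0.5.
xs, ys = [0.1, 0.5, 0.9], [0.2, 0.8, 0.4]
x_next = next_experiment(xs, ys)
print(round(x_next, 2))  # next ratio to test, balancing promise and uncertainty
```

The UCB rule is one common acquisition function; swapping in expected improvement or another criterion changes only the `ucb` function.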

Most active learning approaches also rely on single data streams that don’t capture everything that goes on in an experiment. To equip computational systems with more human-like knowledge, while still taking advantage of the speed and control of automated systems, Li and his collaborators built CRESt. 

CRESt’s robotic equipment includes a liquid-handling robot, a carbothermal shock system to rapidly synthesize materials, an automated electrochemical workstation for testing, characterization equipment including automated electron microscopy and optical microscopy, and auxiliary devices such as pumps and gas valves, which can also be remotely controlled. Many processing parameters can also be tuned.

With the user interface, researchers can chat with CRESt and tell it to use active learning to find promising materials recipes for different projects. CRESt can include up to 20 precursor molecules and substrates into its recipe. To guide material designs, CRESt’s models search through scientific papers for descriptions of elements or precursor molecules that might be useful. When human researchers tell CRESt to pursue new recipes, it kicks off a robotic symphony of sample preparation, characterization, and testing. The researcher can also ask CRESt to perform image analysis from scanning electron microscopy imaging, X-ray diffraction, and other sources.

Information from those processes is used to train the active learning models, which use both literature knowledge and current experimental results to suggest further experiments and accelerate materials discovery.

“For each recipe we use previous literature text or databases, and it creates these huge representations of every recipe based on the previous knowledge base before even doing the experiment,” says Li. “We perform principal component analysis in this knowledge embedding space to get a reduced search space that captures most of the performance variability. Then we use Bayesian optimization in this reduced space to design the new experiment. After the new experiment, we feed newly acquired multimodal experimental data and human feedback into a large language model to augment the knowledgebase and redefine the reduced search space, which gives us a big boost in active learning efficiency.”
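The reduction step Li describes, a principal component analysis of the knowledge-embedding space, can be sketched in a few lines: recipe embeddings (assumed to come from a literature-derived knowledge base) are centered and projected onto their top principal component via power iteration, yielding a low-dimensional search space for the optimizer. The 4-D embeddings below are toy values, not real CRESt data.

```python
import math

# Sketch of PCA by power iteration: find the direction of maximum
# variance in a set of recipe embeddings and project onto it, producing
# a reduced coordinate per recipe for downstream Bayesian optimization.

def top_component(vecs, iters=200):
    d, n = len(vecs[0]), len(vecs)
    mu = [sum(v[j] for v in vecs) / n for j in range(d)]
    X = [[v[j] - mu[j] for j in range(d)] for v in vecs]   # centered data
    w = [1.0] * d                                          # initial direction
    for _ in range(iters):
        # Power iteration: w <- X^T X w, then normalize.
        s = [sum(row[j] * w[j] for j in range(d)) for row in X]
        w = [sum(X[i][j] * s[i] for i in range(n)) for j in range(d)]
        norm = math.sqrt(sum(c * c for c in w))
        w = [c / norm for c in w]
    coords = [sum((v[j] - mu[j]) * w[j] for j in range(d)) for v in vecs]
    return w, coords

# Toy 4-D recipe embeddings whose variation lies mostly along one axis.
emb = [[0.0, 0.1, 0.0, 0.0],
       [1.0, 0.1, 0.0, 0.1],
       [2.0, 0.0, 0.1, 0.0],
       [3.0, 0.1, 0.0, 0.1]]
axis, reduced = top_component(emb)
print([round(c, 2) for c in reduced])  # one coordinate per recipe (sign may flip)
```

In CRESt the reduced space is then searched by Bayesian optimization and re-derived as new experimental data and feedback enlarge the knowledge base; this sketch covers only the projection itself.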

Materials science experiments can also face reproducibility challenges. To address the problem, CRESt monitors its experiments with cameras, looking for potential problems and suggesting solutions via text and voice to human researchers.

The researchers used CRESt to develop an electrode material for an advanced type of high-density fuel cell known as a direct formate fuel cell. After exploring more than 900 chemistries over three months, CRESt discovered a catalyst material made from eight elements that achieved a 9.3-fold improvement in power density per dollar over pure palladium, an expensive precious metal. In further tests, CRESt’s material delivered a record power density in a working direct formate fuel cell even though the cell contained just one-fourth of the precious metals of previous devices.

The results show the potential for CRESt to find solutions to real-world energy problems that have plagued the materials science and engineering community for decades.

“A significant challenge for fuel-cell catalysts is the use of precious metal,” says Zhang. “For fuel cells, researchers have used various precious metals like palladium and platinum. We used a multielement catalyst that also incorporates many other cheap elements to create the optimal coordination environment for catalytic activity and resistance to poisoning species such as carbon monoxide and adsorbed hydrogen atoms. People have been searching for low-cost options for many years. This system greatly accelerated our search for these catalysts.”

A helpful assistant

Early on, poor reproducibility emerged as a major problem that limited the researchers’ ability to perform their new active learning technique on experimental datasets. Material properties can be influenced by the way the precursors are mixed and processed, and any number of problems can subtly alter experimental conditions, requiring careful inspection to correct.

To partially automate the process, the researchers coupled computer vision and vision language models with domain knowledge from the scientific literature, which allowed the system to hypothesize sources of irreproducibility and propose solutions. For example, the models can notice when there’s a millimeter-sized deviation in a sample’s shape or when a pipette moves something out of place. The researchers incorporated some of the model’s suggestions, leading to improved consistency, suggesting the models already make good experimental assistants.

The researchers noted that humans still performed most of the debugging in their experiments.

“CRESt is an assistant, not a replacement, for human researchers,” Li says. “Human researchers are still indispensable. In fact, we use natural language so the system can explain what it is doing and present observations and hypotheses. But this is a step toward more flexible, self-driving labs.”

###

Written by Zach Winn, MIT News

Researchers from Perugia University describe how AI can unveil the secrets of eruptions



KeAi Communications Co., Ltd.

Image: Graphical abstract. Credit: Mónica Ágreda-López, Maurizio Petrelli





Volcanoes are among the most powerful natural hazards on Earth, yet predicting their behavior remains one of the biggest scientific challenges. A new article published in Artificial Intelligence in Geosciences explores how machine learning (ML) can accelerate discoveries in volcano science, while also warning of potential pitfalls if ML is used without critical reflection.

The study, conducted by a duo of researchers from the University of Perugia, involved the analysis of current and emerging applications of artificial intelligence in volcanology.

“While ML tools can process massive amounts of seismic, geochemical, and satellite data far faster than traditional methods, opening up opportunities for earlier hazard detection and improved risk communication, they are not a silver bullet,” says corresponding author Maurizio Petrelli. “We need to be aware of what models really learn and why transparency, reproducibility, and interpretability matter when decisions affect public safety in hazard assessment and crisis management.”

“AI can help us see volcanic systems in new ways, but it must be used responsibly,” says co-author Mónica Ágreda-López. “Our goal is not only to show both the opportunities and the risks but also to promote the understanding behind these tools, so that volcano science can benefit from machine learning without losing rigor and transparency.”

The authors call for careful epistemological evaluation, asking not just what AI can do, but how its methods align with scientific reasoning and the needs of society. The duo also stressed that building trust between AI developers, geoscientists, and at-risk communities is key to harnessing these technologies responsibly.

“Interdisciplinary collaborations and open data practices are essential steps to ensure AI contributes to safer, more resilient societies living with volcano hazards. We also need to consider ethics and evolving policies across the EU, China, and the US,” adds Ágreda-López.

###

Contact the author: Maurizio Petrelli, University of Perugia, Italy, maurizio.petrelli@unipg.it

The publisher KeAi was established by Elsevier and China Science Publishing & Media Ltd to unfold quality research globally. In 2013, our focus shifted to open access publishing. We now proudly publish more than 200 world-class, open access, English language journals, spanning all scientific disciplines. Many of these are titles we publish in partnership with prestigious societies and academic institutions, such as the National Natural Science Foundation of China (NSFC).

 

FHBDSR-Net: AI tool enables accurate measurement of diseased spikelet rate of wheat Fusarium Head Blight from phone images, aiding smart phenotyping




Beijing Zhongke Journal Publishing Co. Ltd.

Image: FHBDSR-Net: Automated measurement of diseased spikelet rate of Fusarium Head Blight on wheat spikes. Credit: Beijing Zhongke Journal Publishing Co. Ltd.






This study is led by Professor Weizhen Liu (School of Computer Science and Artificial Intelligence, Wuhan University of Technology, Wuhan, China). The authors proposed a deep learning algorithm named FHBDSR-Net, which can automatically measure the diseased spikelet rate (DSR) trait from wheat spike images with complex backgrounds captured by mobile phones, providing an efficient and accurate phenotypic measurement tool for wheat Fusarium Head Blight (FHB) resistance breeding.

 

The FHBDSR-Net model integrates three innovative modules: the Multi-scale Feature Enhancement (MFE) module effectively suppresses complex background interference by dynamically fusing lesion texture, morphological features, and lesion-awn contrast features; the Inner-Efficient CIoU (Inner-EfficiCIoU) loss function significantly improves the localization accuracy of dense small targets; the Scale-Aware Attention (SAA) module enhances the encoding capability of multi-scale pathological features and spatial distribution through dilated convolution and self-attention mechanisms.
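For readers unfamiliar with the loss family named above, the sketch below implements the standard Complete IoU (CIoU) bounding-box loss on which variants like Inner-EfficiCIoU build; the authors' actual modified loss is not reproduced here, only the well-known baseline it extends.

```python
import math

# Standard Complete IoU (CIoU) bounding-box regression loss:
#   L = 1 - IoU + rho^2/c^2 + alpha * v
# where rho is the center distance, c the diagonal of the smallest
# enclosing box, and v penalizes aspect-ratio mismatch.
# Boxes are (x1, y1, x2, y2).

def ciou_loss(pred, gt, eps=1e-9):
    px1, py1, px2, py2 = pred
    gx1, gy1, gx2, gy2 = gt
    # Intersection over union
    ix1, iy1 = max(px1, gx1), max(py1, gy1)
    ix2, iy2 = min(px2, gx2), min(py2, gy2)
    inter = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)
    area_p = (px2 - px1) * (py2 - py1)
    area_g = (gx2 - gx1) * (gy2 - gy1)
    iou = inter / (area_p + area_g - inter + eps)
    # Normalized center distance over the enclosing box diagonal
    cw = max(px2, gx2) - min(px1, gx1)
    ch = max(py2, gy2) - min(py1, gy1)
    rho2 = ((px1 + px2 - gx1 - gx2) ** 2 + (py1 + py2 - gy1 - gy2) ** 2) / 4
    c2 = cw ** 2 + ch ** 2 + eps
    # Aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan((gx2 - gx1) / (gy2 - gy1 + eps))
                              - math.atan((px2 - px1) / (py2 - py1 + eps))) ** 2
    alpha = v / (1 - iou + v + eps)
    return 1 - iou + rho2 / c2 + alpha * v

print(ciou_loss((0, 0, 2, 2), (0, 0, 2, 2)))            # ~0 for a perfect match
print(round(ciou_loss((0, 0, 2, 2), (1, 0, 3, 2)), 3))  # larger for a shifted box
```

Because the extra terms penalize center offset and shape mismatch even when IoU is nonzero, CIoU-style losses localize dense small targets, such as individual spikelets, more precisely than plain IoU.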

 

Experimental results show that FHBDSR-Net achieves 93.8% average precision (AP) in diseased spikelet detection, with the average Pearson correlation coefficient between its DSR measurements and manual observations exceeding 0.901. The model generalizes well and robustly, detecting the DSR of wheat spikes accurately across varieties, growth stages, and infection levels. It is also lightweight, with only 7.2M parameters, making it suitable for deployment on resource-constrained mobile devices. FHBDSR-Net can support accurate acquisition of the DSR trait in greenhouse and field settings, providing efficient and reliable technical support for wheat FHB resistance breeding and field disease monitoring, and advancing plant phenotyping toward portable, intelligent use in the field.
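The two headline metrics can be made concrete with a small sketch: DSR is the fraction of a spike's detected spikelets classified as diseased, and agreement with manual scoring is measured by the Pearson correlation across spikes. The detection labels and manual values below are invented for illustration, not taken from the paper.

```python
import math

# Sketch of the reported evaluation: compute the diseased spikelet rate
# (DSR) per spike from per-spikelet detections, then correlate model
# DSRs with manual observations across spikes.

def dsr(detections):
    """detections: per-spikelet labels, 'diseased' or 'healthy'."""
    return sum(1 for d in detections if d == "diseased") / len(detections)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-spike results: each spike has 10 detected spikelets,
# of which k are classified as diseased; manual DSRs for the same spikes.
model = [dsr(["diseased"] * k + ["healthy"] * (10 - k)) for k in (2, 4, 5, 7, 9)]
manual = [0.2, 0.35, 0.5, 0.75, 0.9]
print(round(pearson(model, manual), 3))  # close to 1 when model and manual agree
```

A coefficient above 0.901, as reported, indicates that the automated DSR tracks manual scoring closely enough to rank breeding lines by resistance.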

See the article:

FHBDSR-Net: Automated measurement of diseased spikelet rate of Fusarium Head Blight on wheat spikes

https://link.springer.com/article/10.1007/s42994-025-00245-0

Robot or human? It depends on the situation, a large study shows



Although many of us are still skeptical of chatbots, algorithms, and robots, artificial agents actually handle customers well. Sometimes even better than humans



Aalborg University






When we shop online, a chatbot answers our questions. A virtual assistant helps us track a package. And an AI system guides us through a return of our goods.

We have become accustomed to technology being a customer service representative. But is it actually important to us as customers whether a human or a machine is helping us?

A new international meta-analysis shows that artificial agents are in many cases more positively received than you might think. They are not necessarily perceived as better than flesh-and-blood employees – but the difference between them is often smaller than we might expect.

The study was conducted by four researchers, including Professor Holger Roschk from Aalborg University Business School. Along with Katja Gelbrich, Sandra Miederer and Alina Kerath from Catholic University Eichstätt-Ingolstadt, he analyzed 327 experimental studies with almost 282,000 participants and published the results in the prestigious Journal of Marketing.

"At a time when AI is being integrated everywhere – from banking to healthcare – it is crucial to know when and how we as humans accept machines. It is often postulated that customers prefer to talk to a human, and many are skeptical of machines. But when we look at customers' actual behaviour – whether they follow advice, buy something or return – the differences are often small," says Holger Roschk.

Context determines effect

The human element has previously been highlighted when research has discussed whether an artificial agent is good or bad. But according to Holger Roschk, this approach lacks nuance, and there is a big difference in what the different agents are good at. The human element is not always crucial – it depends on the task and the situation.

For example, chatbots perform particularly well in situations that customers may perceive as embarrassing – for example, when buying health-related or intimate products. In these transactions, many people prefer a discreet digital contact to a human.

"We may have overestimated the need for artificial agents to be human-like. This is not always necessary – in fact, it can be an advantage that they appear as distinct machines," says Holger Roschk.

He adds that algorithms work well in situations where, for example, you need to calculate the shortest route or estimate waiting time. They also perform well for recommendations like getting the right clothing sizes on a web shop.

Robots that have a physical presence perform best in tasks where motor skills and practical help are needed – for example, room service in hotels or tasks in a warehouse.

“We also see that artificial agents, contrary to what you might expect, have certain advantages in situations where a negative response must be given. This may be the case when an algorithm rejects a loan application,” says Holger Roschk, adding that this is probably because the machine’s “insensitivity” can have a disarming effect.

Technology with clear limits

Holger Roschk emphasizes that artificial agents are not a substitute for humans. Technology has clear limits. In situations where empathy, spontaneity and situational awareness are particularly necessary, it is crucial to have people in the shop and behind the screen.

"We recommend that companies focus on using artificial agents in situations where they can relieve employees of physically or mentally demanding tasks. It's not about replacing people – it's about using technology where it makes sense," says Holger Roschk.
