Rice anthropologist among first to use AI to uncover new clues that early humans were prey, not predators
Rice University
Image: Fossil evidence showing leopard bite marks embedded in a hominin skull. Credit: Manuel Domínguez-Rodrigo
Were early humans hunters — or hunted?
For decades, researchers believed that Homo habilis — the earliest known species in our genus — marked the moment humans rose from prey to predators. They were thought to be the first stone tool users and among the earliest meat eaters and hunters based on evidence from early archaeological sites.
But fossils of another early human species — African Homo erectus — show they lived alongside H. habilis about 2 million years ago. That raised a new mystery: Which of these two species was actually making tools and eating the meat of hunted animals? Most anthropologists long suspected H. habilis was responsible, which would have placed them in a dominant predatory role.
New findings from a team led by Rice University anthropologist Manuel Domínguez-Rodrigo challenge that view, revealing that these early humans were still preyed upon by carnivores, most likely leopards. The research was conducted through a partnership between Rice and the Archaeological and Paleontological Museum of Madrid via the Institute of Evolution in Africa (IDEA), which Domínguez-Rodrigo co-directs with Enrique Baquedano. The work is published in the Annals of the New York Academy of Sciences.
“We discovered that these very early humans were eaten by other carnivores instead of mastering the landscape at that time,” Domínguez-Rodrigo said.
The breakthrough was made possible by applying artificial intelligence (AI) to fossil analysis, giving researchers insights they could not have reached with traditional methods alone. Domínguez-Rodrigo is among the first anthropologists to use AI for taxon-specific analysis of bone surface damage — training computer vision models to recognize the microscopic tooth mark patterns left by different predators.
“Human experts have been good at finding modifications on prehistoric bones,” he said. “But there were too many carnivores at that time. AI has opened new doors of understanding.”
His team trained deep learning models to distinguish bone damage left by leopards, lions, hyenas, crocodiles and wolves. When the models analyzed marks on H. habilis fossils from Olduvai Gorge in Tanzania, they consistently identified leopard bite marks with high confidence.
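The team's models and training data are not reproduced here, but the approach follows standard supervised computer vision. Below is a minimal sketch, assuming image crops of tooth-marked bone surfaces sorted into folders named for the carnivore that made them (an illustrative layout, not the authors' dataset), of how such a taxon-specific classifier could be trained in PyTorch:

```python
# Hypothetical sketch: train a CNN to classify carnivore tooth marks on bone surfaces.
# Assumes crops sorted into folders named after the carnivore, e.g. data/train/leopard/,
# data/train/lion/, ... (illustrative layout, not the authors' data).
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("data/train", transform=transform)  # hypothetical path
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)                                # small CNN backbone
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))   # e.g. 5 carnivore classes

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# At inference time, softmax scores over the classes give the kind of per-taxon
# confidence the article describes for the marks on the H. habilis fossils.
```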
“AI is a game changer,” Domínguez-Rodrigo said. “It’s pushing methods that have been stable for 40 years beyond what we imagined. For the first time, we can pinpoint not just that these humans were eaten but by whom.”
The finding challenges a long-standing idea about when and what type of humans began to dominate their environment, showing that even as their brains were beginning to grow, they were still vulnerable.
“The beginning of the human brain doesn’t mean we mastered everything immediately,” Domínguez-Rodrigo said. “This is a more complex story. These early humans, these Homo habilis, were not the ones responsible for that transformation.”
He said it’s a reminder that human evolution wasn’t a single leap from prey to predator but a long, gradual climb and that H. habilis may not have been the turning point researchers once believed.
Domínguez-Rodrigo added that the methods developed for this study could unlock discoveries across anthropology, allowing researchers to analyze other early human fossils in new ways. The work is part of a growing collaboration between Rice and IDEA, where his team is based.
“This is a pioneer center in the use of artificial intelligence to the past,” he said. “It’s one of the first places using AI for paleontological and anthropological research.”
Domínguez-Rodrigo said he hopes this discovery is just the beginning. By applying AI to other fossils, he believes researchers can map when humans truly rose from prey to predator and uncover new chapters in our evolutionary story that have long been hidden.
“It’s extremely stimulating to be the first one to see something for the first time. When you uncover sites that have been hidden from the human eye for more than 2 million years, you’re contributing to how we reconstruct who we are. It’s a privilege and very encouraging.”
The study was co-authored by Marina Vegara Riquelme and Enrique Baquedano, and was supported by the Spanish Ministry of Science and Innovation, the Spanish Ministry of Universities and the Spanish Ministry of Culture.
Journal
Annals of the New York Academy of Sciences
Method of Research
Experimental study
Subject of Research
Animal tissue samples
Article Publication Date
26-Sep-2025
AI system learns from many types of scientific information and runs experiments to discover new materials
The new “CRESt” platform could help find solutions to real-world energy problems that have plagued the materials science and engineering community for decades.
Machine-learning models can speed up the discovery of new materials by making predictions and suggesting experiments. But most models today only consider a few specific types of data or variables. Compare that with human scientists, who work in a collaborative environment and consider experimental results, the broader scientific literature, imaging and structural analysis, personal experience or intuition, and input from colleagues and peer reviewers.
Now, MIT researchers have developed a method for optimizing materials recipes and planning experiments that incorporates information from diverse sources like insights from the literature, chemical compositions, microstructural images, and more. The approach is part of a new platform, named Copilot for Real-world Experimental Scientists (CRESt), that also uses robotic equipment for high-throughput materials testing, the results of which are fed back into large multimodal models to further optimize materials recipes.
Human researchers can converse with the system in natural language, with no coding required, and the system makes its own observations and hypotheses along the way. Cameras and visual language models also allow the system to monitor experiments, detect issues, and suggest corrections.
“In the field of AI for science, the key is designing new experiments,” says Ju Li, School of Engineering Carl Richard Soderberg Professor of Power Engineering. “We use multimodal feedback — for example information from previous literature on how palladium behaved in fuel cells at this temperature, and human feedback — to complement experimental data and design new experiments. We also use robots to synthesize and characterize the material’s structure and to test performance.”
The system is described in a paper published in Nature. The researchers used CRESt to explore more than 900 chemistries and conduct 3,500 electrochemical tests, leading to the discovery of a catalyst material that delivered record power density in a fuel cell that runs on formate salt to produce electricity.
Joining Li on the paper as first authors are PhD student Zhen Zhang, Zhichu Ren PhD ’24, PhD student Chia-Wei Hsu, and postdoc Weibin Chen. Their coauthors are MIT Assistant Professor Iwnetim Abate; Associate Professor Pulkit Agrawal; JR East Professor of Engineering Yang Shao-Horn; MIT.nano researcher Aubrey Penn; Zhang-Wei Hong PhD ’25; Hongbin Xu PhD ’25; Daniel Zheng PhD ’25; MIT graduate students Shuhan Miao and Hugh Smith; MIT postdocs Yimeng Huang, Weiyin Chen, Yungsheng Tian, Yifan Gao, and Yaoshen Niu; former MIT postdoc Sipei Li; and collaborators including Chi-Feng Lee, Yu-Cheng Shao, Hsiao-Tsu Wang, and Ying-Rui Lu.
A smarter system
Materials science experiments can be time-consuming and expensive. They require researchers to carefully design workflows, make new materials, and run a series of tests and analyses to understand what happened. Those results are then used to decide how to improve the material.
To improve the process, some researchers have turned to a machine-learning strategy known as active learning to make efficient use of previous experimental data points and explore or exploit those data. When paired with a statistical technique known as Bayesian optimization (BO), active learning has helped researchers identify new materials for things like batteries and advanced semiconductors.
“Bayesian optimization is like Netflix recommending the next movie to watch based on your viewing history, except instead it recommends the next experiment to do,” Li explains. “But basic Bayesian optimization is too simplistic. It uses a boxed-in design space, so if I say I’m going to use platinum, palladium, and iron, it only changes the ratio of those elements in this small space. But real materials have a lot more dependencies, and BO often gets lost.”
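As a point of reference for Li's description, here is a minimal sketch of plain Bayesian optimization over a fixed, "boxed-in" design space, in this case hypothetical Pt/Pd/Fe ratios with a toy objective, using a Gaussian process surrogate and an expected-improvement rule. It is not the CRESt implementation:

```python
# Minimal sketch of basic Bayesian optimization: a Gaussian process fitted to a few
# "measured" compositions suggests the next composition to test via expected improvement.
# The objective and data are synthetic placeholders.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# Candidate compositions (fractions of Pt, Pd, Fe summing to 1) -- the fixed design box.
candidates = rng.dirichlet(np.ones(3), size=500)

# A handful of previously "measured" compositions and their toy performance values.
X = rng.dirichlet(np.ones(3), size=8)
y = -np.sum((X - np.array([0.2, 0.5, 0.3])) ** 2, axis=1)   # toy objective

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X, y)

mu, sigma = gp.predict(candidates, return_std=True)
best = y.max()
improve = mu - best
z = improve / np.maximum(sigma, 1e-9)
ei = improve * norm.cdf(z) + sigma * norm.pdf(z)            # expected improvement

print("next composition to test (Pt, Pd, Fe):", candidates[np.argmax(ei)])
```

The sketch also shows the limitation Li points to: the optimizer can only rebalance the three named elements inside the simplex it was handed, with no notion of other dependencies.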
Most active learning approaches also rely on single data streams that don’t capture everything that goes on in an experiment. To equip computational systems with more human-like knowledge, while still taking advantage of the speed and control of automated systems, Li and his collaborators built CRESt.
CRESt’s robotic equipment includes a liquid-handling robot, a carbothermal shock system to rapidly synthesize materials, an automated electrochemical workstation for testing, characterization equipment including automated electron microscopy and optical microscopy, and auxiliary devices such as pumps and gas valves, which can also be remotely controlled. Many processing parameters can also be tuned.
With the user interface, researchers can chat with CRESt and tell it to use active learning to find promising materials recipes for different projects. CRESt can incorporate up to 20 precursor molecules and substrates into its recipes. To guide material designs, CRESt’s models search through scientific papers for descriptions of elements or precursor molecules that might be useful. When human researchers tell CRESt to pursue new recipes, it kicks off a robotic symphony of sample preparation, characterization, and testing. The researcher can also ask CRESt to perform image analysis from scanning electron microscopy imaging, X-ray diffraction, and other sources.
Information from those processes is used to train the active learning models, which use both literature knowledge and current experimental results to suggest further experiments and accelerate materials discovery.
“For each recipe we use previous literature text or databases, and it creates these huge representations of every recipe based on the previous knowledge base before even doing the experiment,” says Li. “We perform principal component analysis in this knowledge embedding space to get a reduced search space that captures most of the performance variability. Then we use Bayesian optimization in this reduced space to design the new experiment. After the new experiment, we feed newly acquired multimodal experimental data and human feedback into a large language model to augment the knowledgebase and redefine the reduced search space, which gives us a big boost in active learning efficiency.”
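To illustrate the reduced-search-space step Li describes, here is a small sketch that compresses a knowledge-embedding space with PCA before optimization. The embeddings are random stand-ins for the literature-derived recipe representations; dimensions and counts are assumptions:

```python
# Sketch of the dimensionality-reduction idea in the quote: embed each candidate recipe
# from prior knowledge, compress the embeddings with PCA, and optimize in that
# low-dimensional space. Random vectors stand in for real knowledge embeddings.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Hypothetical: 200 candidate recipes, each represented by a 768-d knowledge embedding.
recipe_embeddings = rng.normal(size=(200, 768))

pca = PCA(n_components=10)                  # keep the components capturing most variance
reduced = pca.fit_transform(recipe_embeddings)
print("explained variance captured:", round(pca.explained_variance_ratio_.sum(), 3))

# Bayesian optimization (as in the earlier sketch) would now propose points in this
# 10-d space; after each robotic experiment, the embeddings are rebuilt from the
# updated knowledge base and the reduced space is re-fit.
```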
Materials science experiments can also face reproducibility challenges. To address the problem, CRESt monitors its experiments with cameras, looking for potential problems and suggesting solutions via text and voice to human researchers.
The researchers used CRESt to develop an electrode material for an advanced type of high-density fuel cell known as a direct formate fuel cell. After exploring more than 900 chemistries over three months, CRESt discovered a catalyst material made from eight elements that achieved a 9.3-fold improvement in power density per dollar over pure palladium, an expensive precious metal. In further tests, CRESt’s material delivered a record power density in a working direct formate fuel cell even though the cell contained just one-fourth of the precious metals of previous devices.
The results show the potential for CRESt to find solutions to real-world energy problems that have plagued the materials science and engineering community for decades.
“A significant challenge for fuel-cell catalysts is the use of precious metals,” says Zhang. “For fuel cells, researchers have used various precious metals like palladium and platinum. We used a multielement catalyst that also incorporates many other cheap elements to create the optimal coordination environment for catalytic activity and resistance to poisoning species such as carbon monoxide and adsorbed hydrogen atoms. People have been searching for low-cost options for many years. This system greatly accelerated our search for these catalysts.”
A helpful assistant
Early on, poor reproducibility emerged as a major problem that limited the researchers’ ability to perform their new active learning technique on experimental datasets. Material properties can be influenced by the way the precursors are mixed and processed, and any number of problems can subtly alter experimental conditions, requiring careful inspection to correct.
To partially automate the process, the researchers coupled computer vision and vision language models with domain knowledge from the scientific literature, which allowed the system to hypothesize sources of irreproducibility and propose solutions. For example, the models can notice when there’s a millimeter-sized deviation in a sample’s shape or when a pipette moves something out of place. The researchers incorporated some of the model’s suggestions, leading to improved consistency, suggesting the models already make good experimental assistants.
The researchers noted that humans still performed most of the debugging in their experiments.
“CRESt is an assistant, not a replacement, for human researchers,” Li says. “Human researchers are still indispensable. In fact, we use natural language so the system can explain what it is doing and present observations and hypotheses. But this is a step toward more flexible, self-driving labs.”
###
Written by Zach Winn, MIT News
Journal
Nature
Article Title
"A multimodal robotic platform for multi-element electrocatalyst discovery"
Researchers from Perugia University describe how AI can unveil the secrets of eruptions
Image: Graphical abstract. Credit: Mónica Ágreda-López, Maurizio Petrelli
Volcanoes are among the most powerful natural hazards on Earth, yet predicting their behavior remains one of the biggest scientific challenges. A new article published in Artificial Intelligence in Geosciences explores how machine learning (ML) can accelerate discoveries in volcano science, while also warning of potential pitfalls if ML is used without critical reflection.
The study, conducted by a duo of researchers from the University of Perugia, involved the analysis of current and emerging applications of artificial intelligence in volcanology.
“While ML tools can process massive amounts of seismic, geochemical, and satellite data far faster than traditional methods, opening up opportunities for earlier hazard detection and improved risk communication, it is not a silver bullet,” says corresponding author Maurizio Petrelli. “We need to be aware of what models really learn and why transparency, reproducibility, and interpretability matter when decisions affect public safety in hazard assessment and crisis management.”
“AI can help us see volcanic systems in new ways, but it must be used responsibly,” says co-author Mónica Ágreda-López. “Our goal is not only to show both the opportunities and the risks but also to promote the understanding behind these tools, so that volcano science can benefit from machine learning without losing rigor and transparency.”
The authors call for careful epistemological evaluation, asking not just what AI can do, but how its methods align with scientific reasoning and the needs of society. The duo also stressed that building trust between AI developers, geoscientists, and at-risk communities is key to harnessing these technologies responsibly.
“Interdisciplinary collaborations and open data practices are essential steps to ensure AI contributes to safer, more resilient societies living with volcano hazards. We also need to consider ethics and evolving policies across the EU, China, and the US,” adds Ágreda-López.
###
Contact the author: Maurizio Petrelli, University of Perugia, Italy, maurizio.petrelli@unipg.it
The publisher KeAi was established by Elsevier and China Science Publishing & Media Ltd to disseminate quality research globally. In 2013, our focus shifted to open access publishing. We now proudly publish more than 200 world-class, open access, English-language journals spanning all scientific disciplines. Many of these are titles we publish in partnership with prestigious societies and academic institutions, such as the National Natural Science Foundation of China (NSFC).
Journal
Artificial Intelligence in Geosciences
Method of Research
Literature review
Subject of Research
Not applicable
Article Title
Opportunities, epistemological assessment and potential risks of machine learning applications in volcano science
“Why can’t we all just get along?” Study reveals how mice and AI learn to cooperate
Research identifies shared neural mechanisms behind cooperation in both biological brains and artificial intelligence systems
At a time when conflict and division dominate the headlines, a new study from UCLA finds remarkable similarities in how mice and artificial intelligence systems each develop cooperation: working together toward shared goals. Both biological brains and AI neural networks developed similar behavioral strategies and neural representations when coordinating their actions, suggesting there are fundamental principles of cooperation that transcend biology and technology.
Why it matters
Cooperation is fundamental to human society and essential for everything from teamwork in the workplace to international diplomacy. Understanding how cooperation emerges and is maintained has profound implications for addressing social conflict and treating disorders that affect social behavior, but also for designing better AI systems. Cooperation plays a vital role in enhancing individual fitness and promoting group survival, whereas its breakdown often leads to detrimental social conflict and instability. As artificial intelligence systems become increasingly sophisticated, researchers have found that both biological and artificial agents can exhibit similar behavioral strategies and neural representations, opening new possibilities for understanding how cooperative behavior emerges when artificial agents interact and whether such interactions are driven by neural network dynamics that resemble those in biological systems. This research provides the first direct comparison of cooperative learning between biological brains and artificial intelligence, offering new insights into one of the most important aspects of social behavior while advancing our understanding of how to create more collaborative AI systems.
What the study did
Researchers from UCLA developed an innovative behavioral task where pairs of mice had to coordinate their actions within increasingly narrow time windows (ultimately just 0.75 seconds) to receive rewards. Using advanced calcium imaging technology, they recorded the activity of individual brain cells in the anterior cingulate cortex (ACC) while mice performed the task. The team then created artificial intelligence agents using multi-agent reinforcement learning and trained them on a similar cooperation task in a virtual environment. This parallel approach allowed direct comparison of how biological and artificial systems learn cooperative behavior.
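The authors' task and training code are not shown here; the toy sketch below illustrates only the coordination rule described above, a joint reward granted when two agents' "nose pokes" fall within a shared 0.75-second window, as a simplified stand-in for the virtual environment used to train the multi-agent reinforcement learning agents:

```python
# Toy sketch of the coordination rule: two agents are rewarded only if their nose pokes
# land within 0.75 seconds of each other. Not the authors' environment or code.
import random

WINDOW = 0.75  # seconds

def joint_reward(poke_time_a, poke_time_b, window=WINDOW):
    """Return a reward of 1 for each agent only if both poke within `window` seconds."""
    cooperated = abs(poke_time_a - poke_time_b) <= window
    return (1, 1) if cooperated else (0, 0)

# Untrained agents poking at random times in a 10-second trial rarely coordinate.
trials = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(1000)]
success = sum(joint_reward(a, b)[0] for a, b in trials) / len(trials)
print(f"chance-level cooperation rate: {success:.2%}")

# Reinforcement learning agents trained on this joint reward can develop waiting and
# partner-tracking strategies analogous to those the mice learned.
```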
What they found
Mice successfully learned to coordinate their actions and gain a mutual reward. They developed three key behavioral strategies: approaching their partner's side of the chamber, waiting for their partner to arrive before nose-poking, and engaging in mutual interactions prior to making decisions. These behaviors increased substantially during training, with interaction behavior more than doubling as the mice became more skilled at cooperation.
The research revealed that neurons in the anterior cingulate cortex encoded these cooperative behaviors and decision-making processes, with animals showing better cooperative performance having stronger neural representations of their partner’s information. Remarkably, when researchers inhibited ACC activity, cooperation decreased substantially, indicating that this brain region is essential for coordinated behavior.
The artificial intelligence agents developed strikingly similar strategies to the mice, including waiting behavior and precise coordination of actions. Both biological brains and artificial networks organized into functional groups that enhanced their response to cooperative stimuli, with partner-related information becoming increasingly important as coordination improved. When researchers selectively disrupted specific cooperation-related neurons in the AI systems, cooperation performance dropped dramatically, confirming that specialized neural circuits drive successful cooperation in both biological and artificial intelligence.
What's next
The research team plans to investigate whether similar neural mechanisms exist in other brain regions involved in social behavior and examine whether understanding these fundamental cooperation principles could advance broader knowledge of how social behaviors develop and function. The parallel between biological and artificial systems suggests that principles derived from studying animal cooperation could inform the design of more sophisticated collaborative AI systems, while AI models could help researchers test hypotheses about brain function that would be difficult to examine in living animals.
From the experts
"We found striking parallels between how mice and AI agents learn to cooperate," said Weizhe Hong, the study's senior author and professor in the UCLA Departments of Neurobiology and Biological Chemistry. "Both systems independently developed similar behavioral strategies and neural representations, suggesting there are fundamental computational principles underlying cooperation that transcend the boundary between biological and artificial intelligence."
This study is part of Hong's broader research program examining prosocial behavior across biological and artificial systems. His recent Nature study on inter-brain neural dynamics showed that both mice and AI systems develop remarkably similar "shared neural spaces" when engaging in social interactions. Combined with his 2024 Nature work on how the anterior cingulate cortex regulates helping behavior toward others in pain and his 2025 Science work on how animals display rescue-like behavior to unconscious mice, these findings reveal a comprehensive picture of the neural mechanisms underlying different forms of prosocial behavior. "Understanding cooperation is crucial for addressing some of society's biggest challenges," Hong added. "By studying how both biological brains and AI systems learn to work together, we can better understand the neural basis of human social behavior while also creating more collaborative artificial intelligence."
About the study
Neural basis of cooperative behavior in biological and artificial intelligence systems, Science, 2025.
The study was led by Weizhe Hong and Jonathan C. Kao at UCLA. Authors: Mengping Jiang, Linfan Gu, Mingyi Ma, Qin Li, and Weizhe Hong, Departments of Neurobiology and Biological Chemistry, David Geffen School of Medicine at UCLA and Department of Bioengineering, Henry Samueli School of Engineering at UCLA; Jonathan C. Kao, Departments of Electrical and Computer Engineering and Computer Science, Henry Samueli School of Engineering at UCLA.
This work was supported by National Institutes of Health grants R01 NS113124, R01 MH130941, RF1 NS132912, R01 MH132736 (to W.H.), NIH DP2 NS122037 (to J.C.K.), NSF CAREER 194346 (to J.C.K.), a Packard Fellowship in Science and Engineering, a Vallee Scholar Award, and a Mallinckrodt Scholar Award (to W.H.). J.C.K. is a co-founder of Luke Health and is on its board of directors. The other authors declare no competing interests.
Journal
Science
Article Title
Neural basis of cooperative behavior in biological and artificial intelligence systems
Article Publication Date
25-Sep-2025
COI Statement
J.C.K. is a co-founder of Luke Health and is on its board of directors. The other authors declare no competing interests.
FHBDSR-Net: AI tool enables accurate measurement of diseased spikelet rate of wheat Fusarium Head Blight from phone images, aiding smart phenotyping
Beijing Zhongke Journal Publishing Co. Ltd.
Image: FHBDSR-Net: AI tool enables accurate measurement of diseased spikelet rate of wheat Fusarium Head Blight from phone images, aiding smart phenotyping. Credit: Beijing Zhongke Journal Publishing Co. Ltd.
This study is led by Professor Weizhen Liu (School of Computer Science and Artificial Intelligence, Wuhan University of Technology, Wuhan, China). The authors proposed a deep learning algorithm named FHBDSR-Net, which can automatically measure the diseased spikelet rate (DSR) trait from wheat spike images with complex backgrounds captured by mobile phones, providing an efficient and accurate phenotypic measurement tool for wheat Fusarium Head Blight (FHB) resistance breeding.
The FHBDSR-Net model integrates three innovative modules: the Multi-scale Feature Enhancement (MFE) module effectively suppresses complex background interference by dynamically fusing lesion texture, morphological features, and lesion-awn contrast features; the Inner-Efficient CIoU (Inner-EfficiCIoU) loss function significantly improves the localization accuracy of dense small targets; the Scale-Aware Attention (SAA) module enhances the encoding capability of multi-scale pathological features and spatial distribution through dilated convolution and self-attention mechanisms.
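The published architecture is not reproduced here; the following is a minimal sketch of how a scale-aware attention block combining dilated convolutions with self-attention, in the spirit of the SAA module described above, could be written in PyTorch. Channel counts, dilation rates, and head numbers are illustrative assumptions:

```python
# Illustrative sketch: parallel dilated convolutions capture multi-scale lesion context,
# and a self-attention layer encodes spatial relationships. Shapes and rates are assumed.
import torch
from torch import nn

class ScaleAwareAttention(nn.Module):
    def __init__(self, channels=128, dilations=(1, 2, 4), num_heads=4):
        super().__init__()
        # One 3x3 convolution per dilation rate, all preserving spatial size.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):                                     # x: (batch, channels, H, W)
        multi_scale = self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
        b, c, h, w = multi_scale.shape
        tokens = multi_scale.flatten(2).transpose(1, 2)        # (batch, H*W, channels)
        attended, _ = self.attn(tokens, tokens, tokens)        # spatial self-attention
        tokens = self.norm(tokens + attended)                  # residual + layer norm
        return tokens.transpose(1, 2).reshape(b, c, h, w)

# Quick shape check on a dummy feature map.
features = torch.randn(2, 128, 32, 32)
print(ScaleAwareAttention()(features).shape)                   # torch.Size([2, 128, 32, 32])
```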
Experimental results show that FHBDSR-Net achieves 93.8% average precision (AP) in diseased spikelet detection, with an average Pearson’s correlation coefficient above 0.901 between its DSR measurements and manual observations. The model generalizes well and remains robust, accurately measuring the DSR of wheat spikes across different varieties, growth stages, and infection levels. With only 7.2M parameters, FHBDSR-Net is also lightweight enough to be deployed on resource-constrained mobile devices. It can support accurate acquisition of the DSR trait in both greenhouse and field settings, providing efficient and reliable technical support for wheat FHB resistance breeding and field disease monitoring, and helping move plant phenotyping analysis toward field-portable, intelligent tools.
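For readers unfamiliar with the metrics, the brief sketch below shows how a per-spike DSR and its Pearson correlation with manual scoring can be computed from detection counts; all numbers are made up for illustration and do not come from the paper:

```python
# Sketch of the evaluation described above: DSR is the fraction of spikelets on a spike
# that are diseased, and agreement with manual scoring is summarized with Pearson's r.
import numpy as np
from scipy.stats import pearsonr

def diseased_spikelet_rate(n_diseased, n_total):
    """DSR = diseased spikelets / total spikelets on a wheat spike."""
    return n_diseased / n_total

# Hypothetical per-spike counts (model-detected diseased, total) and manual DSR scores.
detections = [(3, 15), (7, 14), (1, 16), (10, 13), (5, 15)]
model_dsr = np.array([diseased_spikelet_rate(d, t) for d, t in detections])
manual_dsr = np.array([0.18, 0.52, 0.05, 0.80, 0.30])

r, _ = pearsonr(model_dsr, manual_dsr)
print(f"Pearson's r between model and manual DSR: {r:.3f}")
```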
See the article:
FHBDSR-Net: Automated measurement of diseased spikelet rate of Fusarium Head Blight on wheat spikes
https://link.springer.com/article/10.1007/s42994-025-00245-0
Journal
aBIOTECH
Article Title
FHBDSR-Net: Automated measurement of diseased spikelet rate of Fusarium Head Blight on wheat spikes
Article Publication Date
24-Sep-2025
Robot or human? It depends on the situation, a large study shows
Although many of us remain skeptical of chatbots, algorithms, and robots, these artificial agents actually handle customers well, sometimes even better than humans do.
Aalborg University
When we shop online, a chatbot answers our questions. A virtual assistant helps us track a package. And an AI system guides us through a return of our goods.
We have become accustomed to technology being a customer service representative. But is it actually important to us as customers whether a human or a machine is helping us?
A new international meta-analysis shows that artificial agents are in many cases more positively received than you might think. They are not necessarily perceived as better than flesh-and-blood employees – but the difference between them is often smaller than we might expect.
The study was conducted by four researchers, including Professor Holger Roschk from Aalborg University Business School. Along with Katja Gelbrich, Sandra Miederer and Alina Kerath from Catholic University Eichstätt-Ingolstadt, he analyzed 327 experimental studies with almost 282,000 participants and published the results in the prestigious Journal of Marketing.
"At a time when AI is being integrated everywhere – from banking to healthcare – it is crucial to know when and how we as humans accept machines. It is often postulated that customers prefer to talk to a human, and many are skeptical of machines. But when we look at customers' actual behaviour – whether they follow advice, buy something or return – the differences are often small," says Holger Roschk.
Context determines effect
The human element has previously been highlighted when research has discussed whether an artificial agent is good or bad. But according to Holger Roschk, this approach lacks nuance, and there is a big difference in what the different agents are good at. The human element is not always crucial – it depends on the task and the situation.
Chatbots, for example, perform particularly well in situations that customers may perceive as embarrassing, such as buying health-related or intimate products. In these transactions, many people prefer a discreet digital contact to a human.
"We may have overestimated the need for artificial agents to be human-like. This is not always necessary – in fact, it can be an advantage that they appear as distinct machines," says Holger Roschk.
He adds that algorithms work well in situations where, for example, you need to calculate the shortest route or estimate waiting times. They also perform well for recommendations, such as finding the right clothing size in a web shop.
Robots that have a physical presence perform best in tasks where motor skills and practical help are needed – for example, room service in hotels or tasks in a warehouse.
“We also see that artificial agents, contrary to what you might expect, have certain advantages in situations where a negative response must be given. This may be the case when an algorithm rejects a loan application,” says Holger Roschk, adding that this is probably because the machine’s “insensitivity” can have a disarming effect.
Technology with clear limits
Holger Roschk emphasizes that artificial agents are not a substitute for humans. Technology has clear limits. In situations where empathy, spontaneity and situational awareness are particularly necessary, it is crucial to have people in the shop and behind the screen.
"We recommend that companies focus on using artificial agents in situations where they can relieve employees of physically or mentally demanding tasks. It's not about replacing people – it's about using technology where it makes sense," says Holger Roschk.
Journal
Journal of Marketing
Subject of Research
People
Article Title
Automated Versus Human Agents: A Meta-Analysis of Customer Responses to Robots, Chatbots, and Algorithms and Their Contingencies
AI Innovation Accelerates Geothermal Development
- Artificial intelligence is accelerating the development of commercial-scale geothermal energy by making the exploration and discovery process quicker, cheaper, and more efficient.
- Companies like Zanskar energy are successfully piloting AI-integrated models to find commercially viable geothermal locations, demonstrating the potential for widespread adoption.
- The integration of AI, along with advancements in drilling technologies, is unlocking broader potential for geothermal energy, which is gaining significant investment and bipartisan support due to its ability to provide 24/7 carbon-free power.
Artificial Intelligence integration could seriously accelerate the arrival of a tipping point for commercial-scale geothermal energy development across the United States. While geothermal energy is abundantly available deep under our feet, finding optimal places to drill for it is no easy task. But AI could dramatically change the exploration and discovery process to make geothermal development quicker, cheaper, and more efficient.
“To grow as a national solution, geothermal must overcome significant technical and non-technical barriers in order to reduce cost and risk,” reads a 2019 U.S. Department of Energy (DOE) report on geothermal resources development. “The subsurface exploration required for geothermal energy is foremost among these barriers, given the expense, complexity, and risk of such activities.”
A Utah-based startup called Zanskar energy is currently piloting a novel AI-integrated model to overcome these barriers. The company has demonstrated repeated success in finding commercially viable locations for geothermal energy production. The first of these projects, the Lightning Dock facility in New Mexico, has proven to be “the most productive pumped geothermal well in the U.S.” according to Zanskar’s own estimation. And now, the company has announced a second deep geothermal site discovery at northern Nevada’s Pumpernickel site.
“Our vertically-integrated, AI-native approach to geothermal development is delivering the speed to discovery and speed to development necessary to meet the new paradigm of rapid energy demand growth,” Zanskar co-founder and CEO Carl Hoiland said, as quoted by Clean Technica. “These latest deep drilling results only further confirm what we’ve known all along: conventional geothermal resources are far more abundant and bigger than previously believed, and are the lowest-cost route to delivering gigawatts of reliable, carbon-free, baseload power at scale.”
Geothermal has emerged as a promising clean energy technology that can provide 24/7 power production, unlike variable renewable energy sources such as wind and solar. This means that it could help shore up energy security in an era in which energy demand growth is outpacing clean energy expansion. And while geothermal has historically provided just a tiny fraction of the world’s clean energy, that could all change very quickly thanks to rapid advancements in technology.
In addition to AI-driven advances in exploration and modelling, enhanced drilling technologies have unlocked much broader potential to tap into geothermal energy from almost anywhere in the world. Heat from the Earth’s core is theoretically accessible from anywhere, provided you can dig deep enough. And enhanced geothermal systems are finding ways to dig deeper than ever, thanks to innovations built on technology borrowed from the oil and gas sector and even nuclear fusion.
While AI is helping to revolutionize geothermal energy, it’s also a huge part of the reason that geothermal is gaining so much interest to begin with. While the spread of artificial intelligence ratchets up global energy demand growth projections, private and public investors are increasingly adopting an all-of-the-above approach to energy production. And in the United States, geothermal is one of the rare clean energies that has broad bipartisan support. As such, the sector is one of many cashing in on the “artificial intelligence-driven power boom” according to Bloomberg.
If geothermal energy continues to break technological barriers while enjoying a supportive policy environment, the potential for the sector is considerable. A report released earlier this year by New York-based research firm the Rhodium Group estimates that “geothermal could economically meet up to 64% of expected demand growth by the early 2030s.” This marks a remarkable rate of expansion – currently, geothermal energy makes up just 0.4% of the United States energy mix.
However, advances in geothermal technology and policy have majorly outpaced talent generation, presenting a considerable speed bump for development. The sector is already facing a critical workforce shortage that will be greatly exacerbated by the industry’s rapid projected growth. But as the sector becomes a more visible and viable employer, hopefully the next generation of engineers and geologists will start catching on.
By Haley Zaremba for Oilprice.com
Artificial intelligence in society and research: Leopoldina Annual Assembly opens in Halle (Saale)
Leopoldina
Artificial intelligence in all its facets is the focus of this year’s Annual Assembly of the German National Academy of Sciences Leopoldina, which takes place in Halle (Saale) today, Thursday 25 September, and tomorrow, Friday 26 September. The event brings together renowned experts from various disciplines to discuss current developments in AI research, their possible uses, and what this means for society. To open the event, Dr Lydia Hüskens, Deputy Minister President and Minister for Infrastructure and Digital Affairs of the State of Saxony-Anhalt, and Dr Rolf-Dieter Jungk, State Secretary at the German Federal Ministry of Research, Technology and Space (BMFTR), will give welcome addresses. All the Annual Assembly lectures will also be livestreamed.

“The progressive development of artificial intelligence creates enormous opportunities for research, medicine, communications, and many other areas – but also raises questions about risks and responsibility,” says Leopoldina President Professor Dr Bettina Rockenbach. “Building on its broad interdisciplinary expertise, the Leopoldina is focusing on all aspects of artificial intelligence at this year’s Annual Assembly. The lectures will examine technological breakthroughs and specific potential uses of AI, for example in medicine, the geosciences, and physics, as well as ethical questions arising from AI use.”

At today’s opening, the computer scientist Dr Cordelia Schmid will give a keynote lecture on the development of artificial intelligence, its current capabilities and potential future uses. Following the lecture, there will be a podium discussion featuring researchers from the Leopoldina and Die Junge Akademie. The discussion is titled, “Artificial intelligence in the services of humans – (How) can we achieve this?”, and features the AI researchers Professor Dr Niki Kilbertus and Professor Dr Nadja Klein, the computer scientist Dr Cordelia Schmid, and the innovation researcher Professor PhD Dietmar Harhoff. The journalist Christoph Drösser will moderate the discussion.

The second day of the Annual Assembly will focus on the many areas in which AI can be used. The electrical engineer and computer scientist Professor Dr Sami Haddadin, an expert in robotics, will explain how machines learn to move, “think”, and adapt. The meteorologist Professor Dr Susanne Crewell will discuss how AI is revolutionising weather forecasts and can improve our understanding of climate change. The ethical challenges posed by AI will also be a topic. Dr Philipp Lorenz-Spreen, who researches the interplay between human behaviour and the connectivity and functionality of online platforms, will give a lecture on the complex interplay of AI, social media, and democracy. The physician and physicist Professor Dr Moritz Helmstaedter will give the closing lecture, in which he discusses how artificial and biological intelligence mutually inspire each other.

The computer scientist Professor Dr Zeynep Akata will receive the award “ZukunftsWissen – the Early Career Award from the Leopoldina and Commerzbank Foundation” at the Annual Assembly. She conducts research in the area of explainable AI and develops AI that combines visual, linguistic, and conceptual elements, and thus makes its decisions comprehensible to humans. Zeynep Akata talks about her research in a video on the Leopoldina’s YouTube channel: https://youtu.be/ZqIHyDOvb_k

In addition, the Cothenius Medal 2025 will be awarded to Leopoldina Member Professor Dr Kai Simons for his lifetime’s work in science. The biochemist has studied the function and organisation of cell membranes and done pioneering work for the understanding of the interaction between viruses and host cells.

Talented pupils from throughout Germany will also attend this year’s Annual Assembly as guests. They follow the lectures at the event and have the chance to speak to researchers. Funding is provided by the Wilhelm and Else Heraeus Foundation. (Post)doctoral fellows were once again able to apply for funding to participate in the Annual Assembly. The funding was provided by the Friends of the Leopoldina Academy and the Alfried Krupp von Bohlen und Halbach Foundation.

The mathematician and computer scientist Professor Dr Dr Thomas Lengauer and the physicist and computer scientist Professor Dr Klaus-Robert Müller were responsible for the scientific coordination of the 2025 Annual Assembly. The two Leopoldina Members talk about the idea behind the event in an interview on the Leopoldina website: https://www.leopoldina.org/en/press/newsletter/interview-thomas-lengauer-and-klaus-robert-mueller/

The lectures are held in either English or German and are simultaneously translated into the other language. The Annual Assembly is livestreamed on the Leopoldina’s YouTube channel: https://www.youtube.com/@nationalakademieleopoldina. The livestream is available in English or German from 2 p.m. to 7 p.m. today, and on Friday from 9 a.m. to 6 p.m. Registration is not required. The full Annual Assembly programme is available here: https://www.leopoldina.org/en/events/event/event/3237/

The Leopoldina on Bluesky: https://bsky.app/profile/leopoldina.org
The Leopoldina on LinkedIn: https://www.linkedin.com/company/nationale-akademie-der-wissenschaften-leopoldina
The Leopoldina on YouTube: https://www.youtube.com/@nationalakademieleopoldina
The Leopoldina on X: https://www.twitter.com/leopoldina