New AI models could slash energy use while dramatically improving performance
Neuro-symbolic AI combines neural network pattern recognition and generation with higher-level symbolic reasoning
Image: AI operations supported by large server facilities, like this one at Sandia National Laboratories, xAI's Colossus in Memphis, or others under construction such as Stargate by Microsoft and OpenAI, can consume as much energy as a small to mid-size city. Credit: Sandia National Laboratories
Power usage by AI and data center systems in the U.S. is extraordinary by any measure. The International Energy Agency estimates U.S. AI and data centers used about 415 terawatt-hours of power in 2024, more than 10% of that year's nationwide electricity output, and that figure is expected to double by 2030.
Seeking to head off this unsustainable path of power consumption, researchers at the School of Engineering have developed a proof-of-concept for efficient AI systems that could use 100 times less energy than current ones, while at the same time providing more accurate results on tasks.
The approach, developed in the laboratory of Matthias Scheutz, Karol Family Applied Technology Professor, uses neuro-symbolic AI: a combination of conventional neural network AI with symbolic reasoning, similar to the way humans break tasks and concepts down into steps and categories. The research will be presented at the International Conference on Robotics and Automation in Vienna in May and published in the conference proceedings.
Scheutz and his team focus their work on robots interacting with humans, so the AI technologies they employ are not the type of screen-based large language models (LLMs) like ChatGPT and Gemini, for example. Instead, they study visual-language-action (VLA) models, which are an extension of LLMs with visual and movement capabilities for robots. These models use camera and language inputs and respond by generating actions in the real world, like moving a robot’s wheels, legs, arms, and fingers.
Using conventional, resource-intensive VLA approaches, if a robot were asked to stack blocks into a simple tower, the system might scan the scene, identify the blocks' locations, shapes, and orientations, and interpret the instruction to place each block on top of the last. In the attempt, it might, for example, misjudge a block's shape because of shadows, misplace a block, or stack the blocks so that the tower tips over.
By analogy with LLMs, the robot's failed attempts are akin to a chatbot producing inaccurate or outright false text or images. Famous examples include fabricating imaginary court cases for legal briefs or rendering people with six fingers.
Symbolic reasoning is more efficient than the conventional approach: rather than relying on trial and error, it derives general planning strategies from the puzzle's rules and from abstract categories such as block shape and center of mass.
How Neuro‑Symbolic Systems Work Better
“Like an LLM, VLA models act on statistical results from large training sets of similar scenarios, but that can lead to errors,” said Scheutz. “A neuro-symbolic VLA can apply rules that limit the amount of trial and error during learning and get to a solution much faster. Not only does it complete the task much faster, but the time spent on training the system is significantly reduced.”
In tests using a standard Tower of Hanoi puzzle, the neuro‑symbolic VLA system had a 95% success rate, compared with 34% for standard VLAs. For a more complex version of the puzzle that the robot had not seen in training, the neuro-symbolic system had a 78% success rate, while standard VLAs failed every attempt.
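The symbolic side of such a hybrid can be made concrete with a small sketch. The following is illustrative only, not the authors' system: for Tower of Hanoi, a symbolic planner can emit a provably correct and optimal move sequence directly from the puzzle's rules, leaving only the low-level perception and motion to the neural component, which is one reason the hybrid generalizes to puzzle variants it never saw in training.

```python
# Illustrative sketch (not the authors' system): the symbolic layer of a
# neuro-symbolic VLA can plan Tower of Hanoi exactly, with no trial and error.

def hanoi_plan(n, src="A", aux="B", dst="C"):
    """Return the optimal move sequence for n disks as (disk, from, to) tuples."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest, then move the n-1 back on top.
    return (hanoi_plan(n - 1, src, dst, aux)
            + [(n, src, dst)]
            + hanoi_plan(n - 1, aux, src, dst))

plan = hanoi_plan(3)
print(len(plan))   # optimal plan length is 2**3 - 1 = 7
print(plan[0])     # first move: smallest disk from peg A to peg C
```

Because the plan is derived from the rules rather than learned statistically, it is correct for any number of disks, while a purely statistical model must be retrained or must extrapolate for each new configuration.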
The neuro-symbolic system could be trained in just 34 minutes, while the standard VLA model took over a day and a half. Significantly, training the neuro-symbolic model used only 1% of the energy required to train a VLA model, and the energy savings continued during execution of tasks with the neuro-symbolic model using only 5% of the energy required for running the VLA.
Scheutz draws parallels to familiar LLMs like ChatGPT or Gemini. “These systems are just trying to predict the next word or action in a sequence, but that can be imperfect, and they can come up with inaccurate results or hallucinations. Their energy expense is often disproportionate to the task. For example, when you search on Google, the AI summary at the top of the page consumes up to 100 times more energy than the generation of the website listings.”
With the explosion in user demand for AI systems, and their integration into industrial applications, there is a competitive arms race for ever larger data center systems, facilities whose power usage can reach hundreds of megawatts, more than is typically needed to power a small city.
The researchers conclude that current LLMs and VLAs, despite their popularity, may not be the right foundation for energy‑efficient, reliable AI, and may take us right up against a wall of resource limitations. Instead, they suggest that hybrid neuro‑symbolic AI could provide a more sustainable and dependable path forward.
Method of Research
Experimental study
Subject of Research
Not applicable
Article Title
The Price is Not Right: Neuro-Symbolic Approaches Significantly Outperform VLAs in Performance as well as Computational Cost and Energy Efficiency
Article Publication Date
5-Jun-2026
New computer chip material inspired by the human brain could slash AI energy use
University of Cambridge
Image: Dr Babak Bakhit, University of Cambridge. Credit: Babak Bakhit
Researchers have developed a new kind of nanoelectronic device that could dramatically cut the energy consumed by artificial intelligence hardware by mimicking the human brain.
The researchers, led by the University of Cambridge, developed a form of hafnium oxide that acts as a highly stable, low‑energy ‘memristor’ — a component designed to mimic the efficient way neurons are connected in the brain. The results are reported in the journal Science Advances.
Current AI systems rely on conventional computer chips that shuttle data back and forth between memory and processing units. This constant movement consumes large amounts of electricity, and global demand is exploding as AI adoption expands across industries.
Brain-inspired, or neuromorphic, computing is an alternative way to process information that could reduce energy use by as much as 70% by storing and processing information in the same place, and doing so with extremely low power. Such a system would also be far more adaptable, in the same way our own brains are able to learn and adapt.
“Energy consumption is one of the key challenges in current AI hardware,” said lead author Dr Babak Bakhit, from Cambridge’s Department of Materials Science and Metallurgy. “To address that, you need devices with extremely low currents, excellent stability, outstanding uniformity across switching cycles and devices, and the ability to switch between many distinct states.”
Most existing memristors rely on the formation of tiny conductive filaments inside metal oxide material. But these filaments behave unpredictably and typically require high forming and operating voltages, limiting their usefulness in large-scale data storage and computing systems.
The Cambridge team instead created a new type of hafnium-based thin film that switches states in a completely different way. By adding strontium and titanium and growing the film using a two‑step method, the researchers were able to form tiny electronic gates, or ‘p-n junctions’, inside the oxide where the layers meet. This allows the device to change its resistance smoothly by shifting the height of an energy barrier at the interface, rather than by growing or rupturing the filaments.
Bakhit, who is also affiliated with Cambridge’s Department of Engineering, said this mechanism overcomes one of the biggest challenges in developing memristor technology. “Filamentary devices suffer from random behaviour,” he said. “But because our devices switch at the interface, they show outstanding uniformity from cycle to cycle and from device to device.”
Using the hafnium-based devices, the researchers achieved switching currents about a million times lower than those of some conventional oxide-based devices. The memristors also produced hundreds of distinct, stable conductance levels, a key requirement for analogue ‘in-memory’ computing.
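Why do many distinct conductance levels matter? In analog in-memory computing, a crossbar of memristors stores a weight matrix as conductances, and applying voltages to the rows produces column currents equal to a matrix-vector product in one physical step (Ohm's law per device, Kirchhoff's current law per column). The sketch below is a numerical illustration of that principle; the matrix size, conductance range, and 256-level resolution are assumptions for illustration, not figures from the paper.

```python
import numpy as np

# Illustrative model of analog in-memory computing with quantized memristors.
# All numbers here are hypothetical, chosen only to show the principle.

rng = np.random.default_rng(0)
levels = 256                       # assumed number of programmable conductance states
g_min, g_max = 1e-9, 1e-6          # assumed conductance range, in siemens

# Target weights, then quantize each to the nearest available conductance level,
# as programming a real device would.
weights = rng.uniform(g_min, g_max, size=(4, 4))
step = (g_max - g_min) / (levels - 1)
G = g_min + np.round((weights - g_min) / step) * step

V = rng.uniform(0.0, 0.2, size=4)  # read voltages applied to the rows
I = G @ V                          # column currents = analog dot products

# Quantization error per weight is bounded by half a level step.
print(bool(np.max(np.abs(G - weights)) <= step / 2 + 1e-18))
```

The more stable levels a device offers, the finer this quantization and the more accurate the analog computation, which is why the hundreds of distinct states reported here are significant.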
Laboratory tests showed the devices could reliably endure tens of thousands of switching cycles and store their programmed states for around a day. They also reproduced fundamental learning rules observed in biology, such as spike-timing dependent plasticity: the mechanism by which neurons strengthen or weaken their connections depending on when signals arrive.
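Spike-timing-dependent plasticity can be summarized with a standard pairwise model: if the presynaptic spike arrives shortly before the postsynaptic one, the connection strengthens; if it arrives after, the connection weakens, and the effect decays with the time gap. The sketch below implements that textbook rule; the amplitudes and time constant are conventional illustrative values, not the device behaviour measured in the paper.

```python
import math

# Standard pairwise STDP rule (illustrative parameters, not from the paper).

def stdp_dw(dt_ms, a_plus=0.1, a_minus=0.12, tau_ms=20.0):
    """Weight change for spike-time difference dt = t_post - t_pre, in ms."""
    if dt_ms > 0:                          # pre fired before post: potentiation
        return a_plus * math.exp(-dt_ms / tau_ms)
    else:                                  # post fired before (or with) pre: depression
        return -a_minus * math.exp(dt_ms / tau_ms)

print(stdp_dw(5.0) > 0)     # pre-before-post strengthens the synapse
print(stdp_dw(-5.0) < 0)    # post-before-pre weakens it
```

A memristor reproduces this rule in hardware when overlapping voltage pulses from the two "neurons" nudge its conductance up or down by an amount that depends on their relative timing.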
“These are the properties you need if you want hardware that can learn and adapt, rather than just store bits,” said Bakhit.
However, there are still some challenges to overcome. The current fabrication process requires temperatures of around 700°C — higher than standard semiconductor manufacturing tolerances. “This is currently the main challenge in our device fabrication process,” said Bakhit. “But we’re now working on ways to bring the temperature down to make it more compatible with standard industry processes.”
Despite this, he believes the technology could ultimately be integrated into chip-scale systems. “If we can reduce the temperature and put these devices onto a chip, it would be a major step forward,” he said.
Bakhit, a materials physicist, said the breakthrough followed several years of unsuccessful experiments. The turning point came late last year when he tried a twist on the two‑stage deposition method, adding oxygen only after the first layer had been grown.
“I spent almost three years on this,” he said. “There were a huge number of failures. But at the end of November, we saw the first really good results. It’s still early days of course, but if we can solve the temperature issue, this technology could be game-changing because the energy consumption is so much lower and at the same time, the device performance is highly promising.”
The research was supported in part by the Swedish Research Council (VR), the Royal Academy of Engineering, the Royal Society, and UK Research and Innovation (UKRI). A patent application has been filed by Cambridge Enterprise, the University’s innovation arm.
Journal
Science Advances
Article Title
HfO2-based Memristive Synapses with Asymmetrically Extended p-n Heterointerfaces for Highly Energy-efficient Neuromorphic Hardware
Article Publication Date
20-Mar-2026