Researchers build brain-inspired computer prototype
Small-scale neuromorphic computer prototype learns patterns and makes predictions with fewer training computations and less power
University of Texas at Dallas
Image: Dr. Joseph S. Friedman and his colleagues at The University of Texas at Dallas created a computer prototype that learns patterns and makes predictions using fewer training computations than conventional artificial intelligence systems. Credit: The University of Texas at Dallas
Could computers ever learn more like humans do, without relying on artificial intelligence (AI) systems that must undergo extremely expensive training?
Neuromorphic computing could be the answer. This emerging technology features brain-inspired computer hardware that could perform AI tasks far more efficiently, with fewer training computations and much less power than conventional systems. Consequently, neuromorphic computers also have the potential to reduce reliance on energy-intensive data centers and to bring AI inference and learning to mobile devices.
Dr. Joseph S. Friedman, associate professor of electrical and computer engineering at The University of Texas at Dallas, and his team of researchers in the NeuroSpinCompute Laboratory have taken an important step forward in building a neuromorphic computer by creating a small-scale prototype that learns patterns and makes predictions using fewer training computations than conventional AI systems. Their next challenge is to scale up the proof-of-concept to larger sizes.
“Our work shows a potential new path for building brain-inspired computers that can learn on their own,” Friedman said. “Since neuromorphic computers do not need massive amounts of training computations, they could power smart devices without huge energy costs.”
The team, along with researchers from Everspin Technologies Inc. and Texas Instruments, described the prototype in a study published online Aug. 4 in the Nature journal Communications Engineering. Friedman is co-corresponding author of the study, along with Dr. Sanjeev Aggarwal, president and CEO of Everspin.
Conventional computers and graphics processing units keep memory storage separate from information processing. As a result, they cannot make AI inferences as efficiently as the human brain can. They also require large amounts of labeled data and an enormous number of complex training computations, which can cost hundreds of millions of dollars.
Neuromorphic computers integrate memory storage with processing, which allows them to perform AI operations with much greater efficiency and lower costs. Neuromorphic hardware is inspired by the brain, where networks of neurons and synapses process and store information, respectively. The synapses form the connections between neurons, strengthening or weakening based on patterns of activity. This allows the brain to adapt continuously as it learns.
Friedman’s approach builds on a principle proposed by neuropsychologist Dr. Donald Hebb, referred to as Hebb’s law: neurons that fire together, wire together.
“The principle that we use for a computer to learn on its own is that if one artificial neuron causes another artificial neuron to fire, the synapse connecting them becomes more conductive,” Friedman said.
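The rule Friedman describes can be captured in a few lines of code. The sketch below is a generic Hebbian update on a small weight matrix, not the team's circuit-level implementation; the network size, learning rate, and firing threshold are illustrative assumptions.

```python
import numpy as np

# Generic Hebbian update: when a presynaptic neuron's firing coincides with
# a postsynaptic neuron's firing, the weight (conductance) between them grows.
# Sizes and constants are illustrative, not taken from the paper.

rng = np.random.default_rng(0)
weights = rng.uniform(0.1, 0.3, size=(4, 3))  # 4 input neurons -> 3 outputs
LEARNING_RATE = 0.05

def step(pre_spikes, weights, threshold=0.3):
    """Propagate one timestep of spikes, then apply Hebb's rule."""
    post_spikes = (pre_spikes @ weights > threshold).astype(float)
    # Strengthen only the synapses whose pre- and post-neurons both fired.
    weights = weights + LEARNING_RATE * np.outer(pre_spikes, post_spikes)
    return post_spikes, np.clip(weights, 0.0, 1.0)

pre = np.array([1.0, 1.0, 0.0, 0.0])  # the same two inputs fire repeatedly
for _ in range(10):
    _, weights = step(pre, weights)

print(weights.round(2))  # synapses from the active inputs grow more conductive
```

Once a pathway's weight has grown, the same input drives the output above threshold more readily, which is the self-reinforcing learning behavior the quote describes.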
A major innovation in Friedman’s design is the use of magnetic tunnel junctions (MTJs), nanoscale devices that consist of two layers of magnetic material separated by an insulating layer. Electrons can travel, or tunnel, through this barrier more easily when the magnetizations of the layers are aligned in the same direction and less easily when they are aligned in opposite directions.
In neuromorphic systems, MTJs can be connected in networks to mimic the way the brain processes and learns patterns. As signals pass through MTJs in a coordinated manner, their connections adjust to strengthen certain pathways, much as synaptic connections in the brain are reinforced during learning. The MTJs’ binary switching makes them reliable for storing information, resolving a challenge that has long impeded alternative neuromorphic approaches.
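To see why binary switching is useful, an MTJ synapse can be modeled as a device with exactly two conductance states. The toy model below is a sketch under stated assumptions; the conductance values and the switching rule are illustrative, not device parameters from the study.

```python
from dataclasses import dataclass

# Toy model of a magnetic tunnel junction as a binary synapse. The two
# conductance values and the switching rule are illustrative assumptions.

@dataclass
class MTJSynapse:
    parallel: bool = False       # whether the two magnetic layers are aligned
    g_parallel: float = 1.0      # high conductance: aligned magnetizations
    g_antiparallel: float = 0.4  # low conductance: opposed magnetizations

    @property
    def conductance(self) -> float:
        return self.g_parallel if self.parallel else self.g_antiparallel

    def hebbian_event(self, pre_fired: bool, post_fired: bool) -> None:
        # Binary switching: a coincident pre/post firing event flips the
        # junction into its conductive (parallel) state.
        if pre_fired and post_fired:
            self.parallel = True

s = MTJSynapse()
print(s.conductance)         # 0.4: antiparallel, weakly conductive
s.hebbian_event(True, True)  # correlated activity strengthens the synapse
print(s.conductance)         # 1.0: parallel, strongly conductive
```

Because the stored state is one of two well-separated conductances rather than a fragile analog value, noise and drift are far less likely to corrupt the learned weight, which is the reliability advantage the paragraph above describes.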
Other Erik Jonsson School of Engineering and Computer Science-affiliated researchers involved in the study include first author Peng Zhou PhD’23; Alexander J. Edwards MS’24, PhD’24; and Stephen K. Henrich-Barna MS’97, PhD’23, who also was affiliated with Texas Instruments. In addition to Aggarwal, the other author from Everspin Technologies is Dr. Frederick B. Mancoff.
Friedman’s research is supported by a Faculty Early Career Development Program award from the National Science Foundation and a grant from the Semiconductor Research Corp. through UT Dallas’ Texas Analog Center of Excellence.
In September, the U.S. Department of Energy awarded Friedman a $498,730 two-year grant to provide additional support for his research on neuromorphic computing.
Journal
Communications Engineering
Method of Research
Experimental study
Subject of Research
Not applicable
Article Title
Neuromorphic Hebbian learning with magnetic tunnel junction synapses
Harnessing magnetism for faster, greener computing
University of Delaware engineers uncover a new way to detect and control magnetic waves using electric signals
University of Delaware
A team of engineers at the University of Delaware has discovered a novel way to link the magnetic and electric worlds of computing – a breakthrough that could one day enable computers to run faster and with far greater energy efficiency.
In a new study published in Proceedings of the National Academy of Sciences, researchers from UD’s Center for Hybrid, Active and Responsive Materials (CHARM), a National Science Foundation–funded Materials Research Science and Engineering Center, reveal that magnons – tiny magnetic waves that travel through materials – can generate measurable electric signals.
This finding could open the door to computer chips that integrate magnetic and electric components directly, eliminating the back-and-forth energy transfer that slows today’s devices.
Unlike flowing charged electrons, which encounter resistance and lose energy as heat, magnons carry information through the coordinated “spin” of electrons – tiny magnetic moments that can be thought of as waves traveling through a material. The UD team’s theoretical models show that when these magnetic waves move through antiferromagnetic materials, they can create electric polarization – essentially producing a detectable voltage.
Because antiferromagnetic magnons can travel at terahertz frequencies – roughly a thousand times faster than those in standard magnets – the discovery also offers a potential pathway toward ultrafast, low-power computing. The UD team is now working to experimentally confirm their predictions and explore how magnons interact with light, which could provide an even more efficient way to control them.
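The terahertz figure follows from standard antiferromagnetic-resonance physics, in which the magnon frequency is boosted by the strong internal exchange field. The back-of-the-envelope sketch below uses rough, NiO-like field values that are assumptions for illustration, not numbers from the PNAS paper.

```python
from math import sqrt

# Order-of-magnitude check on the terahertz claim. A ferromagnet's magnon
# frequency is set by ordinary effective fields (~0.1 T), while an
# antiferromagnet's is exchange-enhanced: f ~ gamma * sqrt(2 * H_ex * H_an).
# Field values below are rough, NiO-like assumptions, not from the paper.

GAMMA = 28e9                # electron gyromagnetic ratio, Hz per tesla
H_FIELD_FM = 0.1            # effective field in a typical ferromagnet, T
H_EXCHANGE_AFM = 1000.0     # exchange field in an antiferromagnet, T
H_ANISOTROPY_AFM = 0.5      # anisotropy field in an antiferromagnet, T

f_fm = GAMMA * H_FIELD_FM                                      # a few GHz
f_afm = GAMMA * sqrt(2 * H_EXCHANGE_AFM * H_ANISOTROPY_AFM)    # ~1 THz

print(f"ferromagnetic magnon:     ~{f_fm / 1e9:.1f} GHz")
print(f"antiferromagnetic magnon: ~{f_afm / 1e12:.2f} THz")
```

With these assumed fields, the antiferromagnetic magnon lands in the terahertz range while the ferromagnetic one stays in the gigahertz range, matching the order-of-magnitude speedup the release cites.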
This research is part of CHARM’s broader mission to design hybrid quantum materials for advanced technologies.
Co-authors include Federico Garcia-Gaitan, Yafei Ren, M. Benjamin Jungfleisch, John Q. Xiao, Branislav K. Nikolić, Joshua Zide, and Garnett W. Bryant (NIST/University of Maryland). The work was supported by the National Science Foundation under award DMR-2011824.
Journal
Proceedings of the National Academy of Sciences
Method of Research
Computational simulation/modeling
Subject of Research
Not applicable
Article Title
Magnon-induced electric polarization and magnon Nernst effects
Article Publication Date
23-Oct-2025
Reinforcement learning and blockchain: new strategies to secure the Internet of Medical Things
Improves accuracy, lowers latency, and adapts to evolving threats in real time
Image: (A) Internet of Medical Things (IoMT) devices collect medical data, which are encrypted and sent to a blockchain for secure storage. (B) Reinforcement learning (RL) agents monitor activity to detect potential cyber threats and can dynamically enforce security policies when threats are identified. Credit: Dounia Doha et al.
Concerns about the security and privacy of information transmitted within Internet of Medical Things systems have grown sharply, since these systems generate and manage substantial amounts of sensitive private data. Traditional security methods have not kept pace with evolving cyber threats, making stronger data security in medical settings crucial. Recently, a security framework based on blockchain technology and distributed reinforcement learning has been developed to address these challenges. The new framework ensures that data are stored securely and transmitted reliably while minimizing resource usage, and it enables security measures to adapt to changing threat patterns, enhancing system resilience against attacks. The new method showed improved memory consumption and transaction latency compared with existing approaches, while maintaining high data throughput. This work was published in Intelligent Computing, a Science Partner Journal, under the title “Privacy-Preserving Strategies in the Internet of Medical Things Using Reinforcement Learning and Blockchain” by Dounia Doha and Ping Guo.
The new method achieved 88% accuracy in detecting address resolution protocol (ARP) man-in-the-middle attacks, higher than traditional methods such as support vector machines (83%), random forests (75%), and decision trees (68%). Its latency was also the lowest, at 45 ms, whereas the older models ranged from 85 to 110 ms. Its false-positive rate was the lowest at 6%, compared with 12–20% for the others. Resource utilization efficiency reached 80%, though memory usage was also the highest, at 320 MB. On the Mirai botnet dataset, the new method demonstrated clear advantages by continuously refining detection strategies from incoming data, allowing it to respond to emerging threats more effectively than static models.
The authors based their reinforcement learning on a deep Q-network and used Hyperledger Fabric as the foundational blockchain. The framework outperformed traditional machine learning approaches in adaptability and attack detection accuracy within Internet of Medical Things environments. The deep Q-network was selected as the reinforcement learning algorithm because it balances adaptability and computational cost better than policy-based methods such as proximal policy optimization and asynchronous advantage actor–critic. Whereas policy-based methods require continuous updates to both actor and critic networks, deep Q-networks use a Q-value function, which reduces computational overhead and avoids the complex continuous-action modeling required by algorithms such as deep deterministic policy gradient. This makes them better suited for resource-constrained devices. Hyperledger Fabric, known for its relatively light resource consumption and high transaction throughput, was used to facilitate secure validation and storage of data from Internet of Medical Things sensors.
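As a sketch of the value-based pattern described above, the minimal deep Q-network below maps a vector of traffic features to Q-values over a small set of discrete security actions. The feature count, action set, and architecture are illustrative assumptions rather than the authors' implementation, and a production DQN would also add experience replay and a target network.

```python
import torch
import torch.nn as nn

# Minimal deep Q-network sketch: a state vector of traffic features maps to
# Q-values over discrete security actions. Features, actions, and network
# shape are illustrative assumptions, not the authors' implementation.

N_FEATURES = 8  # e.g., packet rate, ARP-reply ratio, latency (assumed)
ACTIONS = ["allow", "flag_for_review", "block_source"]  # assumed action set

q_net = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, len(ACTIONS)),
)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def choose_action(state: torch.Tensor, epsilon: float = 0.1) -> int:
    """Epsilon-greedy policy over Q-values, the core of value-based RL."""
    if torch.rand(1).item() < epsilon:
        return torch.randint(len(ACTIONS), (1,)).item()
    with torch.no_grad():
        return q_net(state).argmax().item()

def td_update(s, a, r, s_next, gamma=0.99):
    """One temporal-difference step toward r + gamma * max_a' Q(s', a')."""
    q_sa = q_net(s)[a]
    with torch.no_grad():
        target = r + gamma * q_net(s_next).max()
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

state = torch.rand(N_FEATURES)
action = choose_action(state)
td_update(state, action, r=torch.tensor(1.0), s_next=torch.rand(N_FEATURES))
print(ACTIONS[action])
```

Because only the single Q-network is updated per step, this pattern avoids the paired actor and critic updates of policy-based methods, which is the computational saving the paragraph above attributes to the deep Q-network choice.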
Even though this new framework outperformed older methods in terms of adaptability, attack detection accuracy, and suitability for continuous monitoring in high-stakes healthcare applications, further research is needed to increase its applicability and efficiency. At present, the computational and memory demands remain high. Optimizing the computational footprint of reinforcement learning would make the framework more suitable for edge devices and distributed Internet of Things environments with limited resources. Future models may integrate robust security with stronger privacy-preserving techniques, such as federated learning, to safeguard sensitive medical data. Other options, such as hybrid approaches that combine the adaptability of reinforcement learning with the lower resource demands of traditional algorithms, may also provide balanced solutions for Internet of Medical Things environments.
Journal
Intelligent Computing
Method of Research
Experimental study
Subject of Research
Not applicable
Article Title
Privacy-Preserving Strategies in the Internet of Medical Things Using Reinforcement Learning and Blockchain