AI just got 100-fold more energy efficient
Nanoelectronic device performs real-time AI classification without relying on the cloud
Peer-Reviewed Publication
- AI is so energy hungry that most data analysis must be performed in the cloud
- New energy-efficient device enables AI tasks to be performed within wearables
- This allows real-time analysis and diagnostics for faster medical interventions
- Researchers tested the device by classifying 10,000 electrocardiogram samples
- The device successfully identified six types of heartbeats with nearly 95% accuracy
EVANSTON, Ill. — Forget the cloud.
Northwestern University engineers have developed a new nanoelectronic device that can perform accurate machine-learning classification tasks in the most energy-efficient manner yet. Using 100-fold less energy than current technologies, the device can crunch large amounts of data and perform artificial intelligence (AI) tasks in real time without beaming data to the cloud for analysis.
With its tiny footprint, ultra-low power consumption and lack of lag time to receive analyses, the device is ideal for direct incorporation into wearable electronics (like smart watches and fitness trackers) for real-time data processing and near-instant diagnostics.
To test the concept, engineers used the device to classify large amounts of information from publicly available electrocardiogram (ECG) datasets. Not only could the device efficiently and correctly identify an irregular heartbeat, it was also able to determine the arrhythmia subtype from among six different categories with nearly 95% accuracy.
The research will be published on Oct. 12 in the journal Nature Electronics.
“Today, most sensors collect data and then send it to the cloud, where the analysis occurs on energy-hungry servers before the results are finally sent back to the user,” said Northwestern’s Mark C. Hersam, the study’s senior author. “This approach is incredibly expensive, consumes significant energy and adds a time delay. Our device is so energy efficient that it can be deployed directly in wearable electronics for real-time detection and data processing, enabling more rapid intervention for health emergencies.”
A nanotechnology expert, Hersam is Walter P. Murphy Professor of Materials Science and Engineering at Northwestern’s McCormick School of Engineering. He also is chair of the Department of Materials Science and Engineering, director of the Materials Research Science and Engineering Center and member of the International Institute of Nanotechnology. Hersam co-led the research with Han Wang, a professor at the University of Southern California, and Vinod Sangwan, a research assistant professor at Northwestern.
Before machine-learning tools can analyze new data, these tools must first accurately and reliably sort training data into various categories. For example, if a tool is sorting photos by color, then it needs to recognize which photos are red, yellow or blue in order to accurately classify them. An easy chore for a human, yes, but a complicated — and energy-hungry — job for a machine.
For current silicon-based technologies to categorize data from large sets like ECGs, it takes more than 100 transistors — each requiring its own energy to run. But Northwestern’s nanoelectronic device can perform the same machine-learning classification with just two devices. By reducing the number of devices, the researchers drastically reduced power consumption and developed a much smaller device that can be integrated into a standard wearable gadget.
The secret behind the novel device is its unprecedented tunability, which arises from a mix of materials. While traditional technologies use silicon, the researchers constructed the miniaturized transistors from two-dimensional molybdenum disulfide and one-dimensional carbon nanotubes. So instead of needing many silicon transistors — one for each step of data processing — the reconfigurable transistors are dynamic enough to switch among various steps.
“The integration of two disparate materials into one device allows us to strongly modulate the current flow with applied voltages, enabling dynamic reconfigurability,” Hersam said. “Having a high degree of tunability in a single device allows us to perform sophisticated classification algorithms with a small footprint and low energy consumption.”
To test the device, the researchers looked to publicly available medical datasets. They first trained the device to interpret data from ECGs, a task that typically requires significant time from trained health care workers. Then, they asked the device to classify six types of heartbeats: normal, atrial premature beat, premature ventricular contraction, paced beat, left bundle branch block beat and right bundle branch block beat.
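As the paper's title indicates, the device implements support vector machine (SVM) classification with a mixed kernel. A software analogue of that idea can be sketched in a few lines; note that the data, feature dimensions, kernel parameters and the 50/50 kernel blend below are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data: six "beat" classes as labeled feature vectors
# (the real study used ECG recordings, not synthetic features).
X, y = make_classification(n_samples=600, n_features=8, n_informative=6,
                           n_classes=6, n_clusters_per_class=1, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

def mixed_kernel(A, B, gamma=0.1, alpha=0.01, c=0.0, w=0.5):
    """Weighted blend of a Gaussian (RBF) kernel and a sigmoid kernel,
    loosely mirroring the 'mixed-kernel' concept; w is an assumption."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    rbf = np.exp(-gamma * sq_dists)          # Gaussian component
    sig = np.tanh(alpha * A @ B.T + c)       # sigmoid component
    return w * rbf + (1 - w) * sig

# SVC accepts a callable kernel that returns the Gram matrix.
clf = SVC(kernel=mixed_kernel).fit(Xtr, ytr)
print(f"test accuracy: {clf.score(Xte, yte):.2f}")
```

In the hardware version, the tunability of the heterojunction transistors plays the role of the kernel parameters here, letting a single device pair realize what this software sketch does with many arithmetic operations.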
The nanoelectronic device accurately identified each arrhythmia type across 10,000 ECG samples. By bypassing the need to send data to the cloud, the device not only saves critical time for a patient but also protects privacy.
“Every time data are passed around, it increases the likelihood of the data being stolen,” Hersam said. “If personal health data is processed locally — such as on your wrist in your watch — that presents a much lower security risk. In this manner, our device improves privacy and reduces the risk of a breach.”
Hersam imagines that, eventually, these nanoelectronic devices could be incorporated into everyday wearables, personalized to each user’s health profile for real-time applications. They would enable people to make the most of the data they already collect without sapping power.
“Artificial intelligence tools are consuming an increasing fraction of the power grid,” Hersam said. “It is an unsustainable path if we continue relying on conventional computer hardware.”
The study, “Reconfigurable mixed-kernel heterojunction transistors for personalized support vector machine classification,” was supported by the U.S. Department of Energy, National Science Foundation and Army Research Office.
JOURNAL
Nature Electronics
METHOD OF RESEARCH
Experimental study
SUBJECT OF RESEARCH
Not applicable
ARTICLE TITLE
Reconfigurable mixed-kernel heterojunction transistors for personalized support vector machine classification
ARTICLE PUBLICATION DATE
12-Oct-2023
AI researchers expose critical vulnerabilities within major LLMs
Large Language Models (LLMs) such as ChatGPT and Bard have taken the world by storm this year, with companies investing millions to develop these AI tools, and some leading AI chatbots being valued in the billions.
These LLMs, which are increasingly used within AI chatbots, scrape vast amounts of information from the Internet to learn and to inform the answers they provide to user-specified requests, known as ‘prompts’.
However, computer scientists from the AI security start-up Mindgard and Lancaster University in the UK have demonstrated that chunks of these LLMs can be copied in less than a week for as little as $50, and the information gained can be used to launch targeted attacks.
The researchers warn that attackers exploiting these vulnerabilities could reveal private confidential information, bypass guardrails, provide incorrect answers, or stage further targeted attacks.
In a new paper to be presented at CAMLIS 2023 (Conference on Applied Machine Learning for Information Security), the researchers show that it is possible to copy important aspects of existing LLMs cheaply, and they demonstrate evidence of vulnerabilities transferring between different models.
This attack, termed ‘model leeching’, works by querying an LLM with a set of targeted prompts crafted so that the model reveals insightful information about how it works.
The research team, which focused their study on ChatGPT-3.5-Turbo, then used this knowledge to create their own copy model, which was 100 times smaller but replicated key aspects of the LLM.
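The query-then-copy pattern described above resembles model extraction by distillation. The toy sketch below illustrates the idea only; the black-box "teacher", the logistic-regression "student" and all numbers are illustrative assumptions, not the researchers' actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "teacher": a black box we can only query for labels, playing
# the role of the target model behind an API (here, a hidden linear rule).
W_hidden = rng.normal(size=(4, 3))
def teacher(x):
    return int(np.argmax(x @ W_hidden))  # only the answer is returned

# Step 1: "leech" the model -- send targeted queries, record responses.
queries = rng.normal(size=(500, 4))
labels = np.array([teacher(q) for q in queries])

# Step 2: fit a much smaller copy (student) on the stolen query/response
# pairs, via plain multinomial logistic regression with gradient descent.
W = np.zeros((4, 3))
onehot = np.eye(3)[labels]
for _ in range(300):
    logits = queries @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.1 * queries.T @ (p - onehot) / len(queries)

# The copy now mimics the teacher on unseen queries and can be probed
# offline, without detection, to search for weaknesses.
held_out = rng.normal(size=(200, 4))
agree = np.mean([teacher(x) == np.argmax(x @ W) for x in held_out])
print(f"copy agrees with teacher on {agree:.0%} of held-out queries")
```

The security concern is exactly this last step: once an attacker has a faithful local copy, they can rehearse attacks against it freely and then transfer what works back to the original model.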
The researchers then used this model copy as a testing ground to work out how to exploit vulnerabilities in ChatGPT without detection, and applied the knowledge gleaned from their copy to attack vulnerabilities in ChatGPT with an 11% higher success rate.
Dr Peter Garraghan of Lancaster University, CEO of Mindgard, and Principal Investigator on the research, said: “What we discovered is scientifically fascinating, but extremely worrying. This is among the very first works to empirically demonstrate that security vulnerabilities can be successfully transferred between closed source and open source Machine Learning models, which is extremely concerning given how much industry relies on publicly available Machine Learning models hosted in places such as HuggingFace.”
The researchers say their work highlights that although these powerful digital AI technologies have clear uses, there exist hidden weaknesses, and there may even be common vulnerabilities across models.
Businesses across industry are currently investing, or preparing to invest, billions in creating their own LLMs to undertake a wide range of tasks such as smart assistants. Financial services firms and large enterprises are adopting these technologies, but researchers say these vulnerabilities should be a major concern for all businesses that are planning to build or use third-party LLMs.
Dr Garraghan said: “While LLM technology is potentially transformative, businesses and scientists alike will have to think very carefully on understanding and measuring the cyber risks associated with adopting and deploying LLMs.”
The paper will be presented at CAMLIS 2023 in Arlington, Virginia, USA, held on October 19 and 20.
The paper’s authors are Lewis Birch, William Hackett, Stefan Trawicki, and Neeraj Suri of Lancaster University, and Peter Garraghan of Lancaster University and Mindgard.
ENDS