WHERE'S WALDO?
New AI tool scans social media for hidden health risks
An artificial intelligence system called Waldo can spot personal reports of the harmful side effects of popular health products
PLOS
Image: A close-up of a smartphone screen showing social media app icons, including Reddit.
Credit: Ralph Olazo, Unsplash (CC0, https://creativecommons.org/publicdomain/zero/1.0/)
A new artificial intelligence tool can scan social media data to discover adverse events associated with consumer health products, according to a study published September 30th in the open-access journal PLOS Digital Health by John Ayers of the University of California, San Diego, U.S., and colleagues.
Constant post-market surveillance of consumer product safety is crucial for public health. However, current adverse-event reporting systems for approved prescription medications and medical devices depend on voluntary submissions from doctors and manufacturers to the U.S. Food and Drug Administration. The rapid growth of consumer health products, such as cannabis-derived products and dietary supplements, has created a need for new adverse-event detection systems.
In the new study, researchers evaluated a new automated machine learning tool, “Waldo,” that sifts through social media text to find consumer descriptions of adverse events. The tool was tested on its ability to scan Reddit posts for adverse events (AEs) associated with cannabis-derived products.
When compared to human AE annotations of a set of Reddit posts, Waldo had an accuracy of 99.7%, far better than a general-purpose ChatGPT chatbot that was given the same set of posts. In a broader dataset of 437,132 Reddit posts, Waldo identified 28,832 potential reports of harm. When the researchers manually validated a random sample of these posts, they found that 86% were true AEs. The team has made Waldo open-source so that anyone—researchers, clinicians, or regulators—can use it.
“Waldo represents a significant advancement in social media-based AE detection, achieving superior performance compared to existing approaches,” the authors say. “Additionally, Waldo's automated approach has broad applicability beyond cannabis-derived products to other consumer health products that similarly lack regulatory oversight.”
Lead author Karan Desai says, “Waldo shows that the health experiences people share online are not just noise, they’re valuable safety signals. By capturing these voices, we can surface real-world harms that are invisible to traditional reporting systems.”
John Ayers adds, “This project highlights how digital health tools can transform post-market surveillance. By making Waldo open-source, we’re ensuring that anyone, from regulators to clinicians, can use it to protect patients.”
Second author Vijay Tiyyala notes, “From a technical perspective, we demonstrated that a carefully trained model like RoBERTa can outperform state-of-the-art chatbots for AE detection. Waldo’s accuracy was surprising and encouraging.”
By democratizing access to Waldo, the team hopes to accelerate open science and improve safety for patients.
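To give a sense of the approach Tiyyala describes, here is a minimal sketch of how a fine-tuned RoBERTa-style classifier can flag possible adverse-event posts using the Hugging Face transformers library. The checkpoint name, label scheme, and example posts are hypothetical placeholders, not the released Waldo model.

```python
# Minimal sketch: flagging possible adverse-event (AE) posts with a
# fine-tuned RoBERTa-style classifier via Hugging Face transformers.
# "your-org/roberta-ae-detector" is a hypothetical placeholder; substitute
# the actual open-source Waldo weights from the project's repository.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="your-org/roberta-ae-detector",  # hypothetical checkpoint name
)

posts = [
    "Tried a new CBD gummy last night and woke up with a pounding headache.",
    "Anyone know a good strain for relaxing after work?",
]

for post in posts:
    result = classifier(post, truncation=True)[0]
    print(f"{result['label']} ({result['score']:.2f}): {post[:60]}")
```

In the published pipeline, posts flagged this way were still sampled and manually validated, which is how the authors arrived at the 86% precision figure reported above.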
In your coverage, please use this URL to provide access to the freely available paper in PLOS Digital Health: https://plos.io/4m85w0I
Citation: Desai KS, Tiyyala VM, Tiyyala P, Yeola A, Gallegos-Rangel A, Montiel-Torres A, et al. (2025) Waldo: Automated discovery of adverse events from unstructured self reports. PLOS Digit Health 4(9): e0001011. https://doi.org/10.1371/journal.pdig.0001011
Author countries: United States, India
Funding: The author(s) received no specific funding for this work.
Journal
PLOS Digital Health
Method of Research
Computational simulation/modeling
Subject of Research
Not applicable
COI Statement
Competing interests: I have read the journal’s policy and the authors of this manuscript have the following competing interests: Dr. Ayers owns equity positions in Health Watcher and Good Analytics. Dr. Dredze receives consulting fees from Good Analytics and Bloomberg LP.
MUSC study could help lead to personalized TMS for smokers who want to quit
Medical University of South Carolina
Image: Five large networks in the brain are highlighted. The salience network, which filters what is salient, or important to focus on, is shown in green.
Credit: Image courtesy of the MUSC research team.
Science and artificial intelligence combined at the Medical University of South Carolina in a study that could lead to personalized repetitive transcranial magnetic stimulation, or rTMS, for smokers who want to quit.
“We want to improve the effectiveness and specificity of rTMS and reduce side effects,” said study leader Xingbao Li, M.D. He’s an associate professor in the Department of Psychiatry and Behavioral Sciences who has done extensive research on TMS.
His team published its findings in the journal Brain Connectivity.
TMS uses electromagnetic pulses to affect brain activity and may be best known for its role in treating depression and obsessive-compulsive disorder. Side effects include discomfort at the stimulation site and headaches.
TMS has also been approved by the Food and Drug Administration for smoking cessation. MUSC Health was the first place in South Carolina to offer TMS to smokers. Research has shown that multiple sessions of rTMS, specifically over the left dorsolateral prefrontal cortex of the brain, can cut cravings and cigarette consumption.
The new MUSC study gets even more targeted than that, using a form of AI called machine learning to analyze imaging of the brain’s neural networks to see whether it’s possible to predict which smokers are likely to benefit from multiple sessions of rTMS.
To do that, the researchers used functional magnetic resonance imaging, or fMRI, to detect changes in blood flow to measure brain activity. They looked at neural networks when participants were in a resting state, relaxed with their eyes closed, and when they were exposed to smoking photos.
That analysis found that one neural network stood out: the salience network. It filters information to determine what’s salient, or important, to focus on. In the study, connectivity in the salience network was the best predictor of rTMS effectiveness.
“The study gives us a roadmap to extend personalized rTMS and build an fMRI and multimodal biomarker pipeline. The methods can be used for other substance use disorders as well,” Li said.
“Historic studies focus on the reward network in cigarette smokers,” he continued, referring to the parts of the brain involved in motivation and pleasure.
“We were surprised to find that the salience network plays such a crucial role in smoking behavior. This makes the salience network a mechanistic bridge between rTMS neuromodulation and successful smoking cessation.”
They found that bridge with the help of machine learning. In machine learning, computers analyze and learn from data without being explicitly programmed for the task, using algorithms that can spot statistical patterns and adapt to them. That, in turn, lets researchers automate that part of their work and improves accuracy.
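In practice, responder prediction of this kind often takes the form of a cross-validated classifier trained on connectivity features. The sketch below is illustrative only, using scikit-learn and random stand-in data rather than the MUSC team's actual fMRI measures or model.

```python
# Illustrative sketch: predicting rTMS response from salience-network
# connectivity features with a cross-validated classifier (scikit-learn).
# All values here are synthetic stand-ins, not the study's data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_participants, n_features = 42, 20  # 42 matches the trial's sample size
X = rng.normal(size=(n_participants, n_features))  # connectivity features
y = rng.integers(0, 2, size=n_participants)        # 1 = responded to rTMS

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"Cross-validated AUC: {scores.mean():.2f} ± {scores.std():.2f}")
```

Cross-validation matters especially in a small sample like this one, since a model can otherwise memorize individual participants rather than learn a generalizable pattern.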
In this case, machine learning analyzed data collected during an earlier MUSC study on TMS in smokers.
Here’s how that earlier study was set up. The researchers recruited 42 people who wanted to quit smoking. They were split into two groups. One group got real TMS. The other got sham TMS that felt real. They all spent a minute-and-a-half before each TMS session interacting with things like cigarettes and ashtrays, then during the TMS, real or sham, watched videos of people smoking. There were 10 sessions per person over a two-week period.
In the end, the researchers found participants who got the real TMS “smoked significantly fewer cigarettes per day during the two-week treatment,” were more likely to quit by their target date and had lower cravings for tobacco.
Li said that, thanks to the fMRI scans that were also part of that earlier work, the newer study was able to build on its findings. “Using machine learning to identify an individual's dysfunctional brain network and then applying rTMS to the dysfunctional network, we can select who prefers to use rTMS or who prefers medicine to help them stop smoking.”
The study was supported by a grant from the National Institutes of Health. The authors didn’t report any conflicts of interest. The research team includes Kevin Caulfield, Ph.D.; Andrew Chen, Ph.D.; Christopher McMahan, Ph.D.; Karen Hartwell, M.D.; Kathleen Brady, M.D., Ph.D.; and Mark George, M.D.
Li said their relatively small study lays the groundwork for larger studies to further explore targeted TMS for smokers. “This demonstrates that MUSC researchers can use novel and high-impact technology to move beyond fixed-target stimulation into precision neuromodulation.”
Journal
Brain Connectivity
Article Title
Salience Network Connectivity Predicts Response to Repetitive Transcranial Magnetic Stimulation in Smoking Cessation: A Preliminary Machine Learning Study
Article Publication Date
25-Sep-2025
AI at the Eyelid: Glasses that track health through your blinks
Penn researchers develop new glasses and AI to track blinking, offering new insight into fatigue, focus and ocular health
University of Pennsylvania School of Engineering and Applied Science
Image: Graduate student Dongyin Hu wears BlinkWise, a device that uses radio and AI to monitor blinks and track eye health.
Credit: Sylvia Zhang
Penn researchers have developed a groundbreaking AI-powered device that turns ordinary glasses into a smart, energy-efficient health monitor by watching you blink.
The device, called BlinkWise, uses radio signals to track eyelid movements with unprecedented detail, all while preserving privacy and using minimal power. The technology opens doors for assessing fatigue, mental workload and eye-related health issues in real-world settings, from long-haul trucking to everyday office work.
“Because BlinkWise brings together low-power radio-wave sensing and edge AI,” says Mingmin Zhao, Assistant Professor in Computer and Information Science (CIS) and one of the lead researchers on the project, “it can sense more efficiently and run advanced health monitoring directly on the glasses with less energy, less space and less data than existing ‘smart’ eyewear.”
While initial applications focus on health and safety, BlinkWise could also serve as the foundation for a new generation of smart eyewear. “For glasses to be truly ‘smart,’” says Zhao, “they need to do more than respond to voice commands and take pictures. They need to actually understand the wearer. BlinkWise is the first step in that direction.”
Eyes: A Window into Health
The average person blinks over 10,000 times per day. Each blink offers a fingerprint of one’s physiological and mental state, capturing information about fatigue, focus, eye dryness and more.
But it's not just whether a person blinks. How they blink matters, too. Researchers refer to this pattern as blink dynamics: characteristics such as blink duration, the completeness of each blink (full vs. partial), and the timing of eyelid closure and reopening. These details provide significantly more information than simply recording whether the eyelid is open or closed.
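To make those quantities concrete, here is a small illustrative sketch, not BlinkWise's actual code, that pulls blink duration and completeness out of a hypothetical eye-openness time series:

```python
# Sketch: extracting blink dynamics from an eye-openness signal in [0, 1].
# Threshold and test signal are illustrative, not BlinkWise's actual values.
import numpy as np

def blink_metrics(openness, fs, closed_thresh=0.5):
    """Return duration and completeness per blink; assumes the trace
    starts and ends with the eye open. fs is samples per second."""
    below = (openness < closed_thresh).astype(int)
    edges = np.flatnonzero(np.diff(below))        # closure/reopen transitions
    starts, ends = edges[::2] + 1, edges[1::2] + 1
    metrics = []
    for s, e in zip(starts, ends):
        completeness = 1.0 - openness[s:e].min()  # 1.0 means fully closed
        metrics.append({"duration_ms": 1000.0 * (e - s) / fs,
                        "full_blink": completeness > 0.9})
    return metrics

t = np.arange(0, 1, 0.001)                        # one second at 1 kHz
openness = 1 - 0.95 * np.exp(-((t - 0.5) / 0.05) ** 2)  # one synthetic blink
print(blink_metrics(openness, fs=1000))           # ~80 ms, full blink
```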
“The saying goes that the eyes are the window to the soul,” says Lama Al-Aswad, Irene Heinz Given and John La Porte Given Research Professor of Ophthalmology II and a collaborator on the project. “But blinking — how often, how fully, how long — also gives us a window into the body and brain.”
For example, longer eyelid closures can signal drowsiness, a leading cause of car accidents that cost the U.S. economy more than $100 billion annually. An increase in partial blinks can point to dry-eye disease, which affects more than 16 million Americans.
“Blinking is something we do thousands of times per day without thinking, and yet it reflects so much about our health,” says Al-Aswad, who previously collaborated with Zhao to assess cardiovascular risk via the eyes. “Because it’s non-invasive and easy to monitor, blink analysis could become a powerful tool for managing chronic conditions and identifying cognitive changes early.”
More Precise and Portable Blink Analysis
These blink dynamics unfold in milliseconds. Until now, tracking them at that resolution demanded stationary, high-speed cameras and specialized equipment.
“Previous systems required a lab setup,” says Dongyin Hu, the lead author of a paper the team presented at MobiSys 2025 and a doctoral student in CIS. “BlinkWise just clips onto your glasses, so you can monitor blinks anywhere.”
Instead of capturing images, BlinkWise bounces radio waves off the eye to detect minute movements of the eyelid. The system translates the signal into an “eye openness score,” a curve that models blinking in real time, rather than simply classifying the eye as open or closed.
This millisecond-level precision offers a major advantage over conventional systems. “Cameras typically record at 30 or 60 frames per second, which isn’t fast enough to fully capture a blink,” Hu explains. “With radio frequency sensors, we can sample thousands of times per second, enabling much more detailed analysis.”
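The arithmetic behind that claim is simple: at a typical blink duration of roughly 150 milliseconds, a camera captures only a few frames per blink, while kilohertz-rate sampling captures hundreds. The 2 kHz figure below is an assumption consistent with “thousands of times per second,” not a published BlinkWise specification.

```python
# Back-of-the-envelope: samples captured during a single ~150 ms blink.
blink_duration_s = 0.15          # typical blink, roughly 100-400 ms
for rate_hz in (30, 60, 2000):   # camera frame rates vs. assumed RF rate
    print(f"{rate_hz:>4} Hz -> {blink_duration_s * rate_hz:.0f} samples per blink")
```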
AI That Fits on Your Face
To make BlinkWise wearable, the team had to overcome a key challenge: how to run advanced AI on a tiny device using minimal power. Rather than send data to a smartphone or cloud server, BlinkWise processes everything locally on a chip smaller than a postage stamp.
The researchers adapted techniques from image processing and machine learning to create a compact model that can interpret radio signals in real time. In the process, they also reimagined the entire AI pipeline to suit the tight constraints of wearable hardware and the fact that the signal streams continuously. These new approaches make BlinkWise exceptionally efficient.
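For a sense of the scale involved, below is a sketch of a compact one-dimensional convolutional network for streaming radio samples. The architecture and sizes are illustrative assumptions, not the actual BlinkWise model, but they show how such a model can fit in a few thousand parameters.

```python
# Sketch: a tiny 1D CNN that maps a window of RF samples to an
# eye-openness score (PyTorch). Sizes are illustrative, not BlinkWise's.
import torch
import torch.nn as nn

class TinyBlinkNet(nn.Module):
    def __init__(self, in_channels=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.head = nn.Linear(32, 1)  # regress an eye-openness score

    def forward(self, x):             # x: (batch, channels, window)
        return self.head(self.features(x).squeeze(-1))

model = TinyBlinkNet()
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")     # a few thousand; edge-friendly
```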
“Smart glasses have to do a lot, so we didn’t want blink tracking to drain the battery or take up too much computing power,” says Insup Lee, Cecilia Fitler Moore Professor in CIS and Director of the Penn Research in Embedded Computing and Integrated Systems (PRECISE) Center.
BlinkWise, by contrast, generates only a small, targeted radio wave signal, which means less data to handle and less energy required to analyze it. “In fact,” Lee adds, “our system uses less power to process the radar signal than it would take just to transmit camera footage over Wi-Fi.”
Toward Truly Smart Glasses
While BlinkWise is already useful for assessing fatigue, eye dryness and mental workload, the team sees it as just the beginning.
“Today’s smart glasses can take photos or play audio, but they don’t really understand the wearer,” says Zhao. “We believe devices like BlinkWise are the first step toward eyewear that responds to your cognitive state, not just your voice commands.”
Glasses also offer a unique advantage over watches or phones: they sit directly in front of the eyes, providing a front-row view into mental and physical states. “We see BlinkWise not just as a health monitor,” Hu says, “but as a building block for glasses that are truly intelligent.”
This study was conducted at the University of Pennsylvania School of Engineering and Applied Science and the Scheie Eye Institute of the Perelman School of Medicine and was supported by a seed grant in Trustworthy AI Research for Medicine from the AI-enabled Systems: Safe, Explainable and Trustworthy (ASSET) Center.
Additional co-authors include Xin Yang, Ahhyun Yuh and Zihao Wang of Penn Engineering.
Additional images (credit: Sylvia Zhang):
From left: Lama Al-Aswad, Dongyin Hu, Ahhyun Yuh and Mingmin Zhao.
Graduate student Dongyin Hu models BlinkWise, which uses AI and radio signals to track blinks and eye health.
From left: Ahhyun Yuh, Lama Al-Aswad, Mingmin Zhao and Dongyin Hu.
Method of Research
Experimental study
Subject of Research
Not applicable
Article Title
Tracking Blink Dynamics and Mental States on Glasses