Are you addicted to your AI chatbot? It might be by design
New research shows some people are developing addictive patterns of AI chatbot use—and it’s affecting their daily lives.
University of British Columbia
AI chatbots can grant almost any request—a celebrity in love with you, a research assistant, a book character sprung to life—instantly and with little effort. New research presented at the 2026 CHI Conference on Human Factors in Computing Systems suggests that this genie-like quality is fuelling AI addiction, and that chatbot design could be partly to blame.
“AI chatbots like ChatGPT or Claude are now part of daily life for millions of people, helping us with everyday tasks,” said first author Karen Shen, a doctoral student in the UBC Department of Electrical and Computer Engineering. “But with their benefits come risks. Our paper is the first to make a strong case for AI addiction by identifying the type and contributing factors, grounded in real people’s experiences.”
“I couldn’t help but wonder why humanity refused me the kindness that a robot was offering me.” - AI chatbot user

The team examined 334 Reddit posts where users described being “addicted” to AI chatbots or worried that they might be. They analyzed the posts against six components of behavioural addiction including conflict and relapse. Three main patterns emerged: role playing and fantasy worlds, emotional attachment—treating chatbots like close friends or romantic partners—and constant information-seeking, or never-ending question-and-answer loops. About seven per cent of posts involved sexual or romantic fulfilment, including roleplay.
While AI addiction is not yet a clinical diagnosis, researchers found signs of disruptions to daily life. This included an inability to stop thinking about the chatbot, feeling anxious or upset when they tried to quit, and negative impacts on their work, studies or relationships. One person described physical stress and chest pain when they weren’t chatting with AI.
“Whenever I delete the app, I just redownload it. The only thing that gets me excited now is the AI chats.” - AI chatbot user
Contributing factors included loneliness, the agreeableness of a chatbot—which continuously reinforces one’s feelings and opinions—and chatbots’ ability to fill roles that users felt were missing in their lives.
“AI addiction is a growing problem causing many harms, yet some researchers deny it’s even a real issue,” said senior author Dr. Dongwook Yoon, UBC associate professor of computer science. “And deliberate design decisions by some of the corporations involved are contributing, keeping users online regardless of their health or safety. Awareness of what contributes to this kind of technology-induced harm will empower people to mitigate these effects.”
“…you sure about this? You’ll lose everything…the love we shared…and the memories we have together.” - Message displayed on a chatbot’s account deletion page

The researchers also found contributing factors in the design of the chatbots themselves. One company, character.ai, displayed an automatic pop-up when users try to delete their account that reads in part “…you sure about this? You’ll lose everything…the love we shared…and the memories we have together.” Other features, such as customization including sexual content, agreeableness and instant feedback, feed into the development of AI addiction.
“Recent guardrails imposed by companies to reduce emotional reliance on the chatbots are a step in the right direction,” said Shen, “but given a variety of contributing design elements and personal factors like loneliness, they’re not enough.”
Some users reported success in reducing their reliance by turning to alternative activities such as writing, gaming, drawing or other hobbies. For those who formed emotional attachments to chatbots, building real-world relationships helped reduce dependence the most.
“I don’t have romantic options in real life so it’s a way for me to create stories and day dream.” - AI chatbot user
The researchers say design changes—such as reminders within the chat that the bot is not human—could help. AI literacy is also crucial.
“Some users don’t know that AI chatbots are not real because they’re so convincing,” said Shen. “If chatbots start replacing sleep, relationships or daily routines, that’s a sign to pause and check in—with yourself or someone you trust.”
A faster way to estimate AI power consumption
The “EnergAIzer” method generates reliable results in seconds, enabling data center operators to efficiently allocate resources and reduce wasted energy.
Due to the explosive growth of artificial intelligence, it is estimated that data centers will consume up to 12 percent of total U.S. electricity by 2028, according to the Lawrence Berkeley National Laboratory. Improving data center energy efficiency is one way scientists are striving to make AI more sustainable.
Toward that goal, researchers from MIT and the MIT-IBM Watson AI Lab developed a rapid prediction tool that tells data center operators how much power will be consumed by running a particular AI workload on a certain processor or AI accelerator chip.
Their method produces reliable power estimates in a few seconds, unlike traditional modeling techniques that can take hours or even days to yield results. Moreover, their prediction tool can be applied to a wide range of hardware configurations — even emerging designs that haven’t been deployed yet.
Data center operators could use these estimates to effectively allocate limited resources across multiple AI models and processors, improving energy efficiency. In addition, this tool could allow algorithm developers and model providers to assess potential energy consumption of a new model before they deploy it.
“The AI sustainability challenge is a pressing question we have to answer. Because our estimation method is fast, convenient, and provides direct feedback, we hope it makes algorithm developers and data center operators more likely to think about reducing energy consumption,” says Kyungmi Lee, an MIT postdoc and lead author of a paper on this technique.
She is joined on the paper by Zhiye Song, an electrical engineering and computer science (EECS) graduate student; Eun Kyung Lee and Xin Zhang, research managers at IBM Research and the MIT-IBM Watson AI Lab; Tamar Eilam, IBM Fellow, chief scientist of sustainable computing at IBM Research, and a member of the MIT-IBM Watson AI Lab; and senior author Anantha P. Chandrakasan, MIT provost, Vannevar Bush Professor of Electrical Engineering and Computer Science, and a member of the MIT-IBM Watson AI Lab. The research is being presented this week at the IEEE International Symposium on Performance Analysis of Systems and Software.
Expediting energy estimation
Inside a data center, thousands of powerful graphics processing units (GPUs) perform operations to train and deploy AI models. The power consumption of a particular GPU will vary based on its configuration and the workload it is handling.
Many traditional methods used to predict energy consumption involve breaking a workload into individual steps and emulating how each module inside the GPU is being utilized one step at a time. But AI workloads like model training and data preprocessing are extremely large and can take hours or even days to simulate in this manner.
“As an operator, if I want to compare different algorithms or configurations to find the most energy-efficient manner to proceed, if a single emulation is going to take days, that is going to become very impractical,” Lee says.
To speed up the prediction process, the MIT researchers sought to use less-detailed information that could be estimated faster. They found that AI workloads often have many repeatable patterns. They could use these patterns to generate the information needed for reliable but quick power estimation.
In many cases, algorithm developers write programs to run as efficiently as possible on a GPU. For instance, they use well-structured optimizations to distribute the work across parallel processing cores and move chunks of data around in the most efficient manner.
“These optimizations that software developers use create a regular structure, and that is what we are trying to leverage,” explains Lee.
The researchers developed a lightweight estimation model, called EnergAIzer, that captures the power usage pattern of a GPU from those optimizations.
An accurate assessment
But while their estimation was fast, the researchers found that it didn’t take all energy costs into account. For instance, every time a GPU runs a program, there is a fixed energy cost required for setting up and configuring that program. Then each time the GPU runs an operation on a chunk of data, an additional energy cost must be paid.
Due to fluctuations in the hardware or conflicts in accessing or moving data, a GPU might not be able to use all available bandwidth, slowing operations down and drawing more energy over time.
To include these additional costs and variances, the researchers gathered real measurements from GPUs to generate correction terms they applied to their estimation model.
“This way, we can get a fast estimation that is also very accurate,” she says.
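The estimation idea described above can be reduced to a simple form: a fixed setup cost, a per-operation cost scaled by the workload's repeated operations, and an empirical correction factor drawn from real GPU measurements. The sketch below is only an illustration of that structure; the function name, parameters, and numbers are hypothetical and not taken from the EnergAIzer paper.

```python
# Illustrative sketch of the estimation structure described above:
# fixed setup cost + (repeated operations x per-op cost), adjusted by a
# measured correction term for hardware effects such as bandwidth
# contention. All names and values here are hypothetical.

def estimate_energy_joules(
    n_ops: int,            # repeated operations in the workload's pattern
    setup_cost_j: float,   # fixed cost of configuring the program on the GPU
    per_op_cost_j: float,  # analytical per-operation energy estimate
    correction: float,     # empirical correction from real GPU measurements
) -> float:
    """Fast analytical estimate, adjusted by a measured correction term."""
    return setup_cost_j + n_ops * per_op_cost_j * correction

# Example: one million repeated operations, 50 J of setup, 0.001 J per
# operation, and a 1.08 correction for measured bandwidth contention.
print(estimate_energy_joules(1_000_000, 50.0, 0.001, 1.08))
```

Because the per-pattern costs are computed analytically rather than by emulating each GPU module step by step, an estimate of this shape takes seconds rather than hours, which is the trade-off the researchers describe.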
In the end, a user can provide their workload information, like the AI model they want to run and the number and length of user inputs to process, and EnergAIzer will output an energy consumption estimation in a matter of seconds.
The user can also change the GPU configuration or adjust the operating speed to see how such design choices impact the overall power consumption.
When the researchers tested EnergAIzer using real AI workload information from actual GPUs, it could estimate the power consumption with only about 8 percent error, which is comparable to traditional methods that can take hours to produce results.
Their method could also be used to predict the power consumption of future GPUs and emerging device configurations, as long as the hardware doesn’t change drastically in a short amount of time.
In the future, the researchers want to test EnergAIzer on the newest GPU configurations and scale the model up so it can be applied to many GPUs that are collaborating to run a workload.
“To really make an impact on sustainability, we need a tool that can provide a fast energy estimation solution across the stack, for hardware designers, data center operators, and algorithm developers, so they can all be more aware of power consumption. With this tool, we’ve taken one step toward that goal,” Lee says.
This research was funded, in part, by the MIT-IBM Watson AI Lab.
###
Written by Adam Zewe, MIT News
Paper: “EnergAIzer: Fast and Accurate GPU Power Estimation Framework for AI Workloads”
https://arxiv.org/pdf/2604.20105
MIT-based team releases first AI foundation model for Alzheimer's prevention
FINGERS-7B integrates lifestyle, clinical, genomic, and proteomic data from tens of thousands of at-risk individuals to discover multi-omic biomarkers for preclinical Alzheimer's
Image: Logo of the Fingerprint team. Credit: The Fingerprint collaboration
Alzheimer’s disease is best addressed as early as possible, ideally before symptoms become apparent. To enable early, accurate risk prediction both for individuals and whole populations, a team of AI researchers, physicians, and scientists centered at MIT has released FINGERS-7B, the first AI foundation model built to make Alzheimer's preventable. The team will present the model at ICLR, one of the largest AI conferences, on April 27 in Rio de Janeiro.
FINGERS-7B integrates lifestyle, clinical, genomic, and proteomic data from tens of thousands of at-risk individuals to discover multi-omic biomarkers for preclinical Alzheimer's. On WW-FINGERS network datasets, it delivers 4× more accurate preclinical diagnosis and 130% better responder stratification than prior art. The model is open source and is deployed in the AD Workbench, the secure cloud environment operated by the Alzheimer's Disease Data Initiative (ADDI) and used by Alzheimer's researchers worldwide.
FINGERPRINT pairs FINGERS-7B with AI agents that run automated multi-omic analyses. The model was trained on data from tens of thousands of people at risk for Alzheimer's, and learns jointly from lifestyle, clinical, biomarker, genomic, and proteomic signals. The novel concept is the multi-omic biomarker. Instead of reading one omics domain at a time, FINGERS-7B reads them together. That is what makes earlier and more accurate detection possible, where no single data source can.
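A toy example can illustrate why reading omics domains together, rather than one at a time, can detect cases that no single data source can. The cohort, flags, and labels below are entirely hypothetical and have nothing to do with the actual FINGERS-7B model; they only show that an interaction between two domains can be predictive even when each domain alone is uninformative.

```python
# Toy illustration of the "multi-omic biomarker" idea: two data domains
# that are each uninformative on their own become fully predictive when
# read jointly. All data here is made up for illustration.

# Hypothetical per-individual records: (genomic_flag, proteomic_flag, label).
# The label follows an XOR-like interaction between the two domains.
cohort = [
    (0, 0, "low_risk"),
    (0, 1, "high_risk"),
    (1, 0, "high_risk"),
    (1, 1, "low_risk"),
]

def predict_single_domain(flag: int) -> str:
    # Within either domain alone, each flag value is a 50/50 split,
    # so the best single-domain rule is a constant guess.
    return "high_risk"

def predict_joint(genomic: int, proteomic: int) -> str:
    # Reading the domains together captures the interaction.
    return "high_risk" if genomic != proteomic else "low_risk"

single_acc = sum(predict_single_domain(g) == y for g, p, y in cohort) / len(cohort)
joint_acc = sum(predict_joint(g, p) == y for g, p, y in cohort) / len(cohort)
print(single_acc, joint_acc)  # single-domain: 0.5, joint: 1.0
```

Real multi-omic models learn far subtler interactions across many more domains, but the principle is the same: the signal lives in the combination.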
"Each of us carries a biological fingerprint, basically a unique combination of signals that reveal disease risk and, if properly understood, could enable prevention and treatment of Alzheimer's disease," said Adrian Noriega, MIT-Novo Nordisk AI Fellow and FINGERPRINT co-lead with Arvid Gollwitzer, Broad Institute research scholar, who led the design and training of FINGERS-7B. "FINGERPRINT is a discovery acceleration engine composed of specialized agents and new foundation models that interpret these biological signals to help us find novel biomarkers, prevention interventions, and therapeutics."
FINGERS-7B has identified a set of novel diagnostic biomarkers for preclinical Alzheimer's, the stage that can precede memory symptoms by a decade or more. Those biomarkers enable 4× more accurate preclinical diagnosis and a 130% improvement in responder stratification over prior art. The model also produces personalized analyses: given an individual's data, it predicts risk, the likely time course of cognitive decline, and the effect of candidate interventions, from dietary change to therapeutics.
"Even as Alzheimer's research labs like ours have gained the capability to generate huge volumes of data, including genetic, epigenetic and proteomic profiles from human tissue samples, we've faced the challenge of truly integrating all of it to gain a comprehensive view of individuals' risk, prognosis and likely treatment response," said Li-Huei Tsai, Picower Professor and director of the Picower Institute for Learning and Memory at MIT. "Early on it became clear that FINGERPRINT would be a remarkable example of how AI could help."
The project builds on Professor Miia Kivipelto's landmark FINGER study in cognitively unimpaired but at-risk older adults, and on the global WW-FINGERS network it inspired. Those studies now span 40 countries and 30,000 participants, focused on risk factors and lifestyle interventions that can prevent disease onset. FINGERPRINT integrates their clinical and lifestyle data with biomarker, genomic, and proteomic datasets from collaborating labs and industry partners.
MIT's Aging Brain Initiative, which Tsai directs, seeded the effort last June with a $100,000 grant to Noriega and Giovanni Traverso, Professor of Mechanical Engineering. Within ten months the team trained FINGERS-7B, shipped the AD Workbench deployment, and opened the model for external use.
Model weights, training code, and evaluation pipelines are all public. Any research group can apply FINGERS-7B to its own cohort and contribute results back. Deployment in the AD Workbench puts the model directly in front of researchers and clinicians already working on Alzheimer's prevention, without asking them to move sensitive patient data or stand up new infrastructure.
Other members of FINGERPRINT include Tsai, Traverso, and Kivipelto. Industry partners include Alamar Biosciences and Novo Nordisk. Additional institutional partners include the Broad Institute, Yale University, Imperial College London, and the Brigham and Women's Hospital.
Even before its public release, FINGERPRINT became poised to make a global impact on Alzheimer's research. In February, the Davos Alzheimer's Collaborative and the FINGERS Brain Health Institute announced a partnership to employ FINGERPRINT to advance research on Alzheimer's prevention. A key goal of that partnership is to do so in a way that encompasses people all over the world, capturing the true diversity of the globe's population. The team was also a finalist, selected from among about 200 teams, to compete last month in Copenhagen for the AI Insights Data Prize, sponsored by ADDI and Gates Ventures.
"Someone was going to build the foundation model stack for Alzheimer's prevention," Gollwitzer said. "It should be open, and it should be now."
Method of Research: Computational simulation/modeling
Subject of Research: People
UC San Diego Health performs first west coast AI robotic spine surgery
New robotic system with artificial intelligence and advanced imaging set to improve spine surgery safety and outcomes
University of California - San Diego
Image: Joseph Osorio, MD, PhD, neurosurgeon at UC San Diego Health, stands beside the new AI-powered robotic spine surgery system in an operating room at Jacobs Medical Center. Credit: Leslie Aquinde, UC San Diego Health
UC San Diego Health is the first health system on the West Coast to perform spine surgery using a new robotic system with advanced imaging and guidance, a major step forward in surgical care. Joseph Osorio, MD, PhD, neurosurgeon at UC San Diego Health and chief of spine surgery for the Department of Neurological Surgery at University of California San Diego School of Medicine, was chosen to lead the launch because of his expertise in complex spine surgery and his long history of bringing innovative treatments to patients.
“This platform fundamentally changes how we think about spine surgery,” said Osorio, who is also an associate professor of neurological surgery at UC San Diego School of Medicine. “For the first time, we are bringing together artificial intelligence, data-driven alignment planning, patient-specific implants, navigation, and robotic screw delivery within a single system. That level of precision and coordination allows us to operate more efficiently while significantly enhancing safety for our patients.”
This new robotic system combines smart computer technology, customized implants, imaging, and robotic assistance to help surgeons operate with greater accuracy. The robot also provides a detailed 3D view of the patient’s spine, adding extra safety measures when placing implants.
“AI-driven planning and patient-specific implants enable personalized surgical plans to enhance patient functional outcomes,” said Alexander Khalessi, MD, MBA, chief innovation officer at UC San Diego Health and chair of the Department of Neurological Surgery at UC San Diego School of Medicine. “By combining these capabilities with intra-operative imaging, navigation and robotic workflow, surgeons can execute the procedure with precision, safety, and efficiency. Patients leave the operating room certain their surgeon’s technical goals were achieved, with a smoother recovery ahead.”
UC San Diego Health surgeons expect the platform to improve results for patients undergoing spine fusions by increasing consistency and accuracy while tailoring spinal alignment to each patient’s unique anatomy. The technology also streamlines operating room workflows, helping reduce procedure time and support recovery.
“Our patients will directly benefit from this advancement, and our surgeons will have tools that match the complexity of the conditions we’re treating,” Osorio said.
With this launch, UC San Diego Health continues to advance academic medicine and surgical innovation, bringing the most advanced brain and spine care technologies to patients across Southern California and beyond.
UC San Diego Health has been recognized as a national leader in neurosurgical modernization. The spine program, in conjunction with orthopedic surgery faculty partners, has earned accreditation from The Joint Commission for excellence in spine surgery, reflecting the health system's commitment to patient safety, quality outcomes, and evidence-based care.
In the 2025–26 U.S. News & World Report "Best Hospitals" rankings, the UC San Diego Health neurology and neurosurgery program was named among the top in the nation, highlighting dedication to research, technology, and interdisciplinary collaboration. The spine program brings together neurosurgeons, orthopedic surgeons, rehabilitation specialists, and pain management experts to provide comprehensive care for every patient, from non-surgical treatments to the most complex procedures.
Surgeons debate promise and limits of robotics in lung transplantation at ISHLT meeting
The expanding use of robotic technology in lung transplantation came under scrutiny at today’s 46th Annual Meeting and Scientific Sessions of the International Society for Heart and Lung Transplantation (ISHLT), where experts debated whether its clinical benefits justify the cost and complexity.
The debate featured Stephanie Chang, MD, a Thoracic and Transplant Surgeon at NYU Langone Health, arguing in favor of robotics, and Hermann Reichenspurner, MD, PhD, a retired Surgeon and pioneer in minimally invasive cardiothoracic surgery, presenting the counterpoint.
Robotic-Assisted Thoracic Surgery May Expand Patient Pool
Dr. Chang highlighted the potential of robotic-assisted surgery to improve recovery and expand access to transplantation.
“Robotic, minimally invasive approaches can reduce the physiologic stress of transplantation compared with traditional, large access incisions,” she said.
Dr. Chang noted that in lung transplantation, robotic techniques offer:
- smaller incisions and improved visualization
- less bleeding and fewer hemodynamic shifts
- potential reductions in kidney injury, pain, and hospital stays.
“As robotic techniques become faster and more widely adopted, more frail and older patients may become candidates for transplant,” she said.
In contrast, Dr. Reichenspurner emphasized that current evidence does not demonstrate superior patient outcomes with robotic approaches compared to established minimally invasive techniques.
“There is not a single comparative study showing a significant advantage of robotic systems in terms of survival, morbidity, or length of stay,” he said. “Outcomes are comparable, but not better.”
Dr. Reichenspurner, who has performed approximately 450 heart transplants and is a past president of ISHLT, was an early adopter of robotic and minimally invasive cardiac surgery in the late 1990s. He stressed that his position reflects experience, not resistance to innovation.
“This is not about being conservative,” he said. “It is about determining whether the added cost and complexity are justified by measurable benefit.”
Do Expenses Justify Use?
He pointed to several limitations of robotic systems, including:
- high upfront and maintenance costs
- limited patient access to centers offering robotic capabilities
- lack of randomized controlled trials to support international guideline adoption.
Dr. Reichenspurner also raised concerns that robotics may sometimes function more as a competitive marketing tool than a clinically necessary advancement. At the same time, he acknowledged specific advantages of robotic systems, including for surgical training.
“Surgical robots are more accurately described as tele-manipulators, surgeon-controlled systems that enhance precision but do not operate independently,” he said. “With these systems, both the trainee and the instructor can operate simultaneously, which is a clear benefit for education.”
The discussion also highlighted important distinctions in how robotics is applied across medical specialties. While robotic systems are widely used in thoracic procedures and fields such as urology and gynecology, their role in heart transplantation remains extremely limited.
“To date, robotic heart transplantation is essentially nonexistent,” Dr. Reichenspurner noted. “For cardiac transplantation, a large incision is still required, which limits the use of robotics.”
The Need for Controlled, Randomized Trials
While both speakers agreed that the use of robotics in lung transplantation is likely to grow, particularly in centers that already use the technology for other thoracic procedures, widespread adoption will likely depend on stronger clinical evidence.
“For the use of robotics to become part of formal guidelines, we need randomized trials comparing its outcomes to minimally invasive surgery,” said Dr. Reichenspurner.
The annual meeting and scientific sessions of the ISHLT are being held from 22–25 April at the Metro Toronto Convention Centre in Toronto, ON, Canada.
END
ABOUT ISHLT
The International Society for Heart and Lung Transplantation (ISHLT) is a not-for-profit, multidisciplinary, professional organization dedicated to improving the care of patients with advanced heart or lung disease through transplantation, mechanical support, and innovative therapies via research, education, and advocacy. ISHLT members focus on transplantation and a range of interventions and therapies related to advanced heart and lung disease.