What is the impact of predictive AI in the health care setting?
Findings underscore the need to track individuals affected by machine learning predictions
Models built on machine learning in health care can be victims of their own success, according to researchers at the Icahn School of Medicine at Mount Sinai and the University of Michigan. Their study assessed the impact of implementing predictive models on the subsequent performance of those and other models. Their findings—that using the models to adjust how care is delivered can alter the baseline assumptions that the models were “trained” on, often for the worse—were detailed in the October 9 online issue of Annals of Internal Medicine: https://www.acpjournals.org/doi/10.7326/M23-0949.
“We wanted to explore what happens when a machine learning model is deployed in a hospital and allowed to influence physician decisions for the overall benefit of patients,” says first and corresponding author Akhil Vaid, M.D., Clinical Instructor of Data-Driven and Digital Medicine (D3M), part of the Department of Medicine at Icahn Mount Sinai. “For example, we sought to understand the broader consequences when a patient is spared from adverse outcomes like kidney damage or mortality. AI models possess the capacity to learn and establish correlations between incoming patient data and corresponding outcomes, but use of these models, by definition, can alter these relationships. Problems arise when these altered relationships are captured back into medical records.”
The study simulated critical care scenarios at two major health care institutions, the Mount Sinai Health System in New York and Beth Israel Deaconess Medical Center in Boston, analyzing 130,000 critical care admissions. The researchers investigated three key scenarios:
1. Model retraining after initial use
Current practice suggests retraining models to address performance degradation over time. Retraining can improve performance initially by adapting to changing conditions, but the Mount Sinai study shows it can paradoxically lead to further degradation by disrupting the learned relationships between presentation and outcome (a toy illustration of this feedback loop follows this list).
2. Creating a new model after one has already been in use
Following a model’s predictions can save patients from adverse outcomes such as sepsis. However, death may follow sepsis, and the model effectively works to prevent both. Any new model developed in the future to predict death will therefore be subject to the same disrupted relationships. Because we do not know the exact relationships between all possible outcomes, any data from patients whose care was influenced by machine learning may be inappropriate for training further models.
3. Concurrent use of two predictive models
If two models make simultaneous predictions, using one set of predictions renders the other obsolete. Therefore, predictions should be based on freshly gathered data, which can be costly or impractical.
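To make the feedback loop concrete, here is a minimal, hypothetical sketch in Python (not the study's actual simulation). It assumes a single risk feature, a 50 percent alert threshold, and that clinicians acting on an alert avert 60 percent of flagged adverse outcomes; none of these numbers come from the paper. It shows how retraining on post-deployment records can make a model underestimate risk, because outcomes the original model helped prevent no longer appear in the data it is retrained on.

```python
# Hypothetical sketch (not the study's code): a toy version of scenario 1, in which
# a deployed risk model alters the very outcomes it was trained to predict.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def draw_patients(n=20_000):
    """Simulate patients: one continuous feature drives the true risk of an adverse outcome."""
    x = rng.normal(size=(n, 1))
    true_risk = 1 / (1 + np.exp(-(x[:, 0] - 1)))
    y = rng.binomial(1, true_risk)
    return x, y

# 1) Train the original model on pre-deployment data.
x0, y0 = draw_patients()
original = LogisticRegression().fit(x0, y0)

# 2) Deploy it: when the model flags high risk, care changes and (assumed) 60% of
#    flagged adverse outcomes are averted, so the medical record now shows no event.
x1, y1 = draw_patients()
flagged = original.predict_proba(x1)[:, 1] > 0.5
averted = flagged & (rng.random(len(y1)) < 0.6)
y1_recorded = np.where(averted, 0, y1)

# 3) Naively retrain on the post-deployment records.
retrained = LogisticRegression().fit(x1, y1_recorded)

# 4) Compare predictions for a genuinely high-risk patient (x = 2).
x_query = np.array([[2.0]])
print("true risk at x=2:          %.2f" % (1 / (1 + np.exp(-(2.0 - 1)))))
print("original model predicts:   %.2f" % original.predict_proba(x_query)[0, 1])
print("retrained model predicts:  %.2f" % retrained.predict_proba(x_query)[0, 1])
```

In this toy setting the retrained model assigns a genuinely high-risk patient a much lower risk than either the true risk or the original model, which is the kind of disrupted presentation-to-outcome relationship the study describes.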
“Our findings reinforce the complexities and challenges of maintaining predictive model performance in active clinical use,” says co-senior author Karandeep Singh, M.D., Associate Professor of Learning Health Sciences, Internal Medicine, Urology, and Information at the University of Michigan. “Model performance can fall dramatically if patient populations change in their makeup. However, agreed-upon corrective measures may fall apart completely if we do not pay attention to what the models are doing—or more properly, what they are learning from.”
“We should not view predictive models as unreliable,” says co-senior author Girish Nadkarni, M.D., MPH, Irene and Dr. Arthur M. Fishberg Professor of Medicine at Icahn Mount Sinai, Director of The Charles Bronfman Institute of Personalized Medicine, and System Chief of Data-Driven and Digital Medicine. “Instead, it’s about recognizing that these tools require regular maintenance, understanding, and contextualization. Neglecting to monitor their performance and impact can undermine their effectiveness. We must use predictive models thoughtfully, just like any other medical tool. Learning health systems must pay heed to the fact that indiscriminate use of, and updates to, such models will cause false alarms, unnecessary testing, and increased costs.”
“We recommend that health systems promptly implement a system to track individuals impacted by machine learning predictions, and that the relevant governmental agencies issue guidelines,” says Dr. Vaid. “These findings are equally applicable outside of health care settings and extend to predictive models in general. As such, we live in a model-eat-model world where any naively deployed model can disrupt the function of current and future models, and eventually render itself useless.”
The paper is titled “Implications of the Use of Artificial Intelligence Predictive Models in Health Care Settings: A Simulation Study.”
The remaining authors are Ashwin Sawant, M.D.; Mayte Suarez-Farinas, Ph.D.; Juhee Lee, M.D.; Sanjeev Kaul, M.D.; Patricia Kovatch, BS; Robert Freeman, RN; Joy Jiang, BS; Pushkala Jayaraman, MS; Zahi Fayad, Ph.D.; Edgar Argulian, M.D.; Stamatios Lerakis, M.D.; Alexander W. Charney, M.D., Ph.D.; Fei Wang, Ph.D.; Matthew Levin, M.D., Ph.D.; Benjamin Glicksberg, Ph.D.; Jagat Narula, M.D., Ph.D.; and Ira Hofer, M.D.
The work was supported by a clinical and translational award for infrastructure (UL1TR004419).
-####-
About the Icahn School of Medicine at Mount Sinai
The Icahn School of Medicine at Mount Sinai is internationally renowned for its outstanding research, educational, and clinical care programs. It is the sole academic partner for the eight member hospitals* of the Mount Sinai Health System, one of the largest academic health systems in the United States, providing care to a large and diverse patient population.
Ranked 14th nationwide in National Institutes of Health (NIH) funding and in the 99th percentile in research dollars per investigator according to the Association of American Medical Colleges, Icahn Mount Sinai has a talented, productive, and successful faculty. More than 3,000 full-time scientists, educators, and clinicians work within and across 44 academic departments and 36 multidisciplinary institutes, a structure that facilitates tremendous collaboration and synergy. Our emphasis on translational research and therapeutics is evident in such diverse areas as genomics/big data, virology, neuroscience, cardiology, geriatrics, as well as gastrointestinal and liver diseases.
Icahn Mount Sinai offers highly competitive MD, PhD, and Master’s degree programs, with current enrollment of approximately 1,300 students. It has the largest graduate medical education program in the country, with more than 2,000 clinical residents and fellows training throughout the Health System. In addition, more than 550 postdoctoral research fellows are in training within the Health System.
A culture of innovation and discovery permeates every Icahn Mount Sinai program. Mount Sinai’s technology transfer office, one of the largest in the country, partners with faculty and trainees to pursue optimal commercialization of intellectual property to ensure that Mount Sinai discoveries and innovations translate into healthcare products and services that benefit the public.
Icahn Mount Sinai’s commitment to breakthrough science and clinical care is enhanced by academic affiliations that supplement and complement the School’s programs.
Through the Mount Sinai Innovation Partners (MSIP), the Health System facilitates the real-world application and commercialization of medical breakthroughs made at Mount Sinai. Additionally, MSIP develops research partnerships with industry leaders such as Merck & Co., AstraZeneca, Novo Nordisk, and others.
The Icahn School of Medicine at Mount Sinai is located in New York City on the border between the Upper East Side and East Harlem, and classroom teaching takes place on a campus facing Central Park. Icahn Mount Sinai’s location offers many opportunities to interact with and care for diverse communities. Learning extends well beyond the borders of our physical campus, to the eight hospitals of the Mount Sinai Health System, our academic affiliates, and globally.
-------------------------------------------------------
* Mount Sinai Health System member hospitals: The Mount Sinai Hospital; Mount Sinai Beth Israel; Mount Sinai Brooklyn; Mount Sinai Morningside; Mount Sinai Queens; Mount Sinai South Nassau; Mount Sinai West; and New York Eye and Ear Infirmary of Mount Sinai.
JOURNAL
Annals of Internal Medicine
METHOD OF RESEARCH
Computational simulation/modeling
SUBJECT OF RESEARCH
Not applicable
ARTICLE TITLE
Implications of the Use of Artificial Intelligence Predictive Models in Health Care Settings: A Simulation Study
ARTICLE PUBLICATION DATE
9-Oct-2023
AI language models could help diagnose schizophrenia
Peer reviewed | Experimental study | People
Peer-Reviewed Publication

Scientists at the UCL Queen Square Institute of Neurology have developed new tools, based on AI language models, that can characterise subtle signatures in the speech of patients diagnosed with schizophrenia.
The research, published in PNAS, aims to understand how the automated analysis of language could help doctors and scientists diagnose and assess psychiatric conditions.
Currently, psychiatric diagnosis is based almost entirely on talking with patients and those close to them, with only a minimal role for tests such as blood tests and brain scans.
However, this lack of precision prevents a richer understanding of the causes of mental illness and hampers the monitoring of treatment.
The researchers asked 26 participants with schizophrenia and 26 control participants to complete two verbal fluency tasks, where they were asked to name as many words as they could either belonging to the category “animals” or starting with the letter “p”, in five minutes.
To analyse the answers given by participants, the team used an AI language model that had been trained on vast amounts of internet text to represent the meaning of words in a similar way to humans. They tested whether the words people spontaneously recalled could be predicted by the AI model, and whether this predictability was reduced in patients with schizophrenia.
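As a purely illustrative sketch (not the pipeline used in the PNAS paper), the predictability of a fluency response to a language model could be quantified as the average log-probability the model assigns to the produced words, scored left to right. The choice of GPT-2, the comma-joined formatting, and the toy word lists below are all assumptions made for the example, not details from the study.

```python
# Hypothetical sketch (not the paper's pipeline): one way to quantify how
# "predictable" a verbal-fluency word list is to a pretrained language model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def mean_log_prob(words):
    """Average log-probability per token of a fluency response, scored left to right."""
    text = ", ".join(words)                  # e.g. "dog, cat, horse, ..."
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)         # labels are shifted internally for next-token loss
    return -out.loss.item()                  # higher = more predictable to the model

# Toy example lists (invented, not study data): a semantically tight list vs. a looser one.
typical = ["dog", "cat", "horse", "cow", "sheep", "goat", "pig", "chicken"]
unusual = ["dog", "axolotl", "chair", "pangolin", "cloud", "tapir", "spoon", "newt"]

print("typical list:", round(mean_log_prob(typical), 3))
print("unusual list:", round(mean_log_prob(unusual), 3))
```

Under a measure of this kind, a list that follows common semantic associations typically scores as more predictable than one that jumps between unrelated items.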
They found that the answers given by control participants were indeed more predictable by the AI model than those generated by people with schizophrenia, and that this difference was largest in patients with more severe symptoms.
The researchers think that this difference might have to do with the way the brain learns relationships between memories and ideas, and stores this information in so-called ‘cognitive maps’. They find support for this theory in a second part of the same study, where the authors used brain scanning to measure activity in parts of the brain involved in learning and storing these ‘cognitive maps’.
Lead author, Dr Matthew Nour (UCL Queen Square Institute of Neurology and University of Oxford), said: “Until very recently, the automatic analysis of language has been out of reach of doctors and scientists. However, with the advent of artificial intelligence (AI) language models such as ChatGPT, this situation is changing.
“This work shows the potential of applying AI language models to psychiatry – a medical field intimately related to language and meaning.”
Schizophrenia is a debilitating and common psychiatric disorder that affects around 24 million people worldwide and over 685,000 people in the UK.
According to the NHS, symptoms of the condition may include hallucinations, delusions, confused thoughts and changes in behaviour.
The team from UCL and Oxford now plan to use this technology in a larger sample of patients, across more diverse speech settings, to test whether it might prove useful in the clinic.
Dr Nour said: “We are entering a very exciting time in neuroscience and mental health research. By combining state-of-the-art AI language models and brain scanning technology, we are beginning to uncover how meaning is constructed in the brain, and how this might go awry in psychiatric disorders. There is enormous interest in using AI language models in medicine. If these tools prove safe and robust, I expect they will begin to be deployed in the clinic within the next decade.”
The study was funded by Wellcome.
JOURNAL
Proceedings of the National Academy of Sciences
METHOD OF RESEARCH
Experimental study
SUBJECT OF RESEARCH
People
ARTICLE TITLE
Trajectories through semantic spaces in schizophrenia and the relationship to ripple bursts
ARTICLE PUBLICATION DATE
9-Oct-2023