Saturday, September 28, 2024

Artificial intelligence may enhance patient safety, say BU researchers


Study marks an important first step towards leveraging AI technology to reduce preventable harms, achieve better healthcare outcomes

Boston University School of Medicine

(Boston) — Generative artificial intelligence (genAI) uses hundreds of millions, sometimes billions, of data points to train itself to produce realistic and innovative outputs that can mimic human-created content. Its applications include generating personalized recommendations for online shoppers, creating audio and visual content, and accelerating engineering design. In healthcare, possible genAI uses include enhancing imaging technologies, predicting the course of a disease in an individual patient, and discovering new vaccines.

BU researchers tested an advanced publicly available genAI model, GPT-4, to determine its ability to answer questions across five key areas of patient safety in the 50-question self-assessment for the Certified Professional in Patient Safety (CPPS) exam, a standardized multiple-choice certification exam for patient safety professionals. GPT-4 answered 88% of the questions correctly, demonstrating a high level of performance.

“While other studies have looked at genAI's performance on exams from different healthcare specialties over the past year, ours is the first robust test of its proficiency specifically in patient safety,” said corresponding author Nicholas Cordella, MD, MSc, assistant professor of medicine at BU Chobanian & Avedisian School of Medicine. 

James Moses, MD, MPH, formerly an associate professor of pediatrics at the school and now chief of quality, safety and patient experience at Corewell Health in Michigan, is a co-author of the study.

The researchers presented questions from the CPPS self-assessment exam to the GPT-4 model without any additional training or medical fine-tuning. They then evaluated the model's performance across various exam categories. They found GPT-4 performed particularly well in the domains of Patient Safety and Solutions, Measuring and Improving Performance, and Systems Thinking and Design/Human Factors. Based on the strength of those results, the researchers outlined areas where patient safety professionals could begin to conduct more testing of the real-world strengths and weaknesses of AI.
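For readers curious about the mechanics, the sketch below illustrates how a zero-shot, multiple-choice evaluation of this kind might be run: each item is posed to GPT-4 without fine-tuning, and accuracy is tallied by exam category. This is a hypothetical illustration only; the question text, category labels, prompt wording, and scoring code are assumptions for demonstration, not the CPPS self-assessment items or the study's actual protocol.

```python
# Hypothetical sketch of a zero-shot multiple-choice evaluation like the one
# described above. The example item, categories, and prompt wording are
# illustrative placeholders, not the study's materials or methods.
# Assumes the OpenAI Python client (openai>=1.0) and an OPENAI_API_KEY.
from collections import defaultdict
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

# Each item: question stem, lettered options, correct letter, exam category.
questions = [
    {
        "stem": "Which strategy best reduces the risk of wrong-site surgery?",
        "options": {"A": "Preoperative time-out", "B": "Longer shifts",
                    "C": "Verbal orders only", "D": "Fewer checklists"},
        "answer": "A",
        "category": "Patient Safety and Solutions",
    },
    # ... the remaining items would be loaded the same way
]

def ask(item):
    """Pose one multiple-choice item to GPT-4 and return its chosen letter."""
    options = "\n".join(f"{k}. {v}" for k, v in item["options"].items())
    prompt = (f"{item['stem']}\n{options}\n"
              "Answer with the single letter of the best option.")
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the grading run as deterministic as possible
    )
    return resp.choices[0].message.content.strip()[:1].upper()

# Tally correct answers per exam category.
scores = defaultdict(lambda: [0, 0])  # category -> [correct, total]
for item in questions:
    correct = ask(item) == item["answer"]
    scores[item["category"]][0] += int(correct)
    scores[item["category"]][1] += 1

for category, (right, total) in scores.items():
    print(f"{category}: {right}/{total} ({100 * right / total:.0f}%)")
```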

“Our findings suggest that AI could help doctors better recognize, address and prevent mistakes in hospitals and clinics. While more research is needed to fully understand what current AI can do in patient safety, this study shows that AI has some potential to improve healthcare by assisting clinicians in addressing preventable harms,” said Cordella, who is also medical director for quality and patient safety at Boston Medical Center.

He believes the use of AI holds promise for improving patient safety systems and better tackling the intractable problem of medical errors, which are estimated to cause approximately 400,000 deaths every year.

Cordella said the study aligns with the broader idea that AI can help professionals, including doctors, enhance their work. By using AI to support their tasks, clinicians may be able to improve the safety and efficiency of healthcare, similar to how other knowledge workers are adapting AI to boost their performance.

The study also revealed limitations in current AI technology and cautioned that users must remain vigilant for bias, false confidence, fabricated data, and hallucinations in the responses of large language models such as GPT-4.

"Our findings suggest that AI has the potential to significantly enhance patient safety, marking an enabling step towards leveraging this technology to reduce preventable harms and achieve better healthcare outcomes. However, it's important to recognize this as an initial step, and we must rigorously test and refine AI applications to truly benefit patient care," said Cordella.

These findings appear online in the Joint Commission Journal on Quality and Patient Safety.
