New center to develop AI-based imaging tools to improve diagnosis, care
WashU Medicine Mallinckrodt Institute of Radiology leads effort on image-based precision medicine
WashU Medicine
Image: The WashU Medicine Mallinckrodt Institute of Radiology is establishing the Center for Computational and AI-enabled Imaging Sciences in partnership with WashU’s McKelvey School of Engineering. The new center is dedicated to developing AI-based imaging tools to improve the diagnosis and precision treatment of numerous medical conditions. (Credit: WashU Medicine)
Mallinckrodt Institute of Radiology (MIR) at Washington University School of Medicine in St. Louis is establishing a new center dedicated to developing AI-based imaging tools to improve the diagnosis and precision treatment of cancers, cardiovascular disease, neurological diseases and numerous other conditions. The new Center for Computational and AI-enabled Imaging Sciences brings together collaborators from across WashU Medicine and others from WashU’s McKelvey School of Engineering.
AI has already shown promise in analyzing vast collections of medical images to generate clinically relevant insights, identifying patterns and anomalies that physicians might not otherwise detect on their own.
“Mallinckrodt Institute of Radiology has long been a national leader in developing innovative imaging technologies, from the invention of positron emission tomography to today’s AI applications in diagnostics and image analysis, and this new center represents an ambitious expansion of our capability,” said Pamela K. Woodard, MD, the Elizabeth E. Mallinckrodt Professor and head of MIR at WashU Medicine. “Integrating AI into imaging will enhance how we diagnose disease, predict its progression and tailor treatments to the unique needs of each patient.”
The new center will help advance AI-driven imaging technologies, such as two tools recently developed at WashU Medicine in collaboration with MIR that are now being commercialized. One can analyze mammograms to predict an individual patient’s risk of breast cancer over the next five years. Another rapidly maps the brain to help neurosurgeons plan delicate surgeries and avoid sensitive areas that control speech, movement and cognitive function. The center will be a hub for expertise in image analysis, using sophisticated computing tools to find patterns in datasets of millions of medical images and de-identified patient records and providing insight into both the progression and the potential treatment of disease. The center will also support training on these tools for clinicians and researchers.
The new center will join a growing WashU ecosystem of collaborative AI initiatives that are helping to shape the future of medicine. These include the Center for Health AI (CHAI), which was established as part of the joint agreement to build deeper collaboration between BJC Health System and WashU Medicine and is focused on making health care more personalized and effective for patients and more efficient for providers; and the AI for Health Institute at WashU McKelvey Engineering, which is working on other AI-powered medical innovations.
The Center for Computational and AI-enabled Imaging Sciences will primarily focus on developing AI-based medical imaging applications that integrate information from different imaging types — ranging from digital microscope images of cells to MRI scans to X-rays — to identify clinically informative connections between them. This may include identifying previously unknown early indicators of disease onset that could allow for more effective clinical interventions.
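To make the idea of multimodal integration concrete, here is a minimal sketch, in Python, of one common approach sometimes called “early fusion”: features from two imaging modalities are concatenated into a single vector per patient and used to train one predictive model. It is purely illustrative and is not the center’s actual methodology; the data are synthetic, and the random feature vectors stand in for what a real pipeline would extract from the images themselves.

    # Illustrative only: early fusion of two imaging modalities.
    # Synthetic data; a real system would derive features from the images
    # (e.g., with convolutional networks) and use clinical outcome labels.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_patients = 500

    # Stand-in feature vectors for each modality.
    microscopy = rng.normal(size=(n_patients, 32))
    mri = rng.normal(size=(n_patients, 64))

    # Synthetic outcome that depends weakly on both modalities,
    # so that fusing them is more informative than either alone.
    signal = microscopy[:, 0] + mri[:, 0]
    labels = (signal + rng.normal(size=n_patients) > 0).astype(int)

    # Early fusion: one concatenated feature vector per patient.
    fused = np.concatenate([microscopy, mri], axis=1)

    X_train, X_test, y_train, y_test = train_test_split(
        fused, labels, test_size=0.25, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))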
The center will bring together AI imaging experts and researchers from across the Medical Campus, including Siteman Cancer Center, based at Barnes-Jewish Hospital and WashU Medicine, and from the school’s Departments of Medicine, of Neurology, of Psychiatry and of Radiation Oncology.
A clear image of the future of medicine
The new center will house information from the imaging databases of all the participating departments, collectively representing a range of imaging modalities across many different types of disease. The AI-powered tools developed from those large datasets will enable increasingly precise diagnosis for individual patients, Woodard said.
AI algorithms applied to medical imaging have already been used to detect and classify new subtypes of some disorders in ways that can guide clinical treatment decisions. The breadth of information that will be available at the new center will accelerate this work in a broader range of conditions.
The new center will be led by Mark Anastasio, PhD, a leading expert in computational imaging science and AI for imaging applications. He joins WashU as the Mallinckrodt Endowed Professor of Imaging Sciences in MIR, where he will also serve as Vice Chair for Imaging Sciences and AI Research, and as Professor of Electrical & Systems Engineering in McKelvey Engineering. Anastasio comes to WashU from the University of Illinois Urbana-Champaign, where he has served as head of the Department of Bioengineering for the past six years.
“Institutions with leading academic medical centers that unite medical data, clinical expertise and advanced AI research will lead the next revolution in healthcare,” said Anastasio. “WashU is exactly such an institution and an ideal home for this center that will enable us to build a community to drive innovation that advances patient care in ways few other institutions can achieve.”
As part of that community building, Anastasio will join the leadership team of the Oncologic Imaging Program at Siteman Cancer Center. He will also be the Associate Chief Research Information Officer for Biomedical Imaging at the Institute for Informatics, Data Science & Biostatistics (I2DB), where he will work with institute director Philip R.O. Payne, PhD, the Janet and Bernard Becker Professor of Medicine. Payne is also the Chief Health AI Officer for CHAI and the Vice Chancellor for Biomedical Informatics and Data Science at WashU Medicine.
“AI-enabled imaging has the potential to be as transformative for medicine as earlier waves of innovation — from the adoption of electronic health records to the rise of precision medicine and the advent of real-world evidence generation,” said Payne. “That transformation is being realized here at WashU Medicine because of the dynamic and collaborative environment that exists at our institution, exemplified by leading-edge, transdisciplinary initiatives like this one.”
Aaron Bobick, PhD, dean of WashU McKelvey Engineering and the James M. McKelvey Professor, said dedicated centers such as this will be crucial to maximizing the medical and engineering expertise needed to build out the potential for AI in medical applications.
“Medical imaging offers some of the most exciting challenges in imaging science and artificial intelligence, both of which are core domains for McKelvey Engineering,” said Bobick. “I am certain that the innovations that this center will facilitate by combining the skills of WashU Engineering faculty with the broad range of medical expertise at WashU Medicine will lead to advances that both drive the science forward and benefit patients.”
About WashU Medicine
WashU Medicine is a global leader in academic medicine, including biomedical research, patient care and educational programs with more than 3,000 faculty. Its National Institutes of Health (NIH) research funding portfolio is the second largest among U.S. medical schools and has grown 83% since 2016. Together with institutional investment, WashU Medicine commits well over $1 billion annually to basic and clinical research innovation and training. Its faculty practice is consistently among the top five in the country, with more than 2,000 faculty physicians practicing at 130 locations. WashU Medicine physicians exclusively staff Barnes-Jewish and St. Louis Children’s hospitals — the academic hospitals of BJC HealthCare — and Siteman Cancer Center, a partnership between BJC HealthCare and WashU Medicine and the only National Cancer Institute-designated comprehensive cancer center in Missouri. WashU Medicine physicians also treat patients at BJC’s community hospitals in our region. With a storied history in MD/PhD training, WashU Medicine recently dedicated $100 million to scholarships and curriculum renewal for its medical students, and is home to top-notch training programs in every medical subspecialty as well as physical therapy, occupational therapy, and audiology and communications sciences.
Subject of Research
People
How can computer science educators teach students to calibrate their trust in GenAI programming tools?
Study shows short-term increase in student trust for generative AI programming tools; long-term trust still unclear. Researchers weigh in on what this means for computer science educators.
Image: A screenshot of the type of code students worked on during the study. (Credit: University of California San Diego)
How much do undergraduate computer science students trust chatbots powered by large language models, such as GitHub Copilot and ChatGPT? And how should computer science educators modify their teaching based on these levels of trust?
These were the questions that a group of U.S. computer scientists set out to answer in a study that will be presented at the Koli Calling conference, Nov. 11 to 16 in Finland. Over the study’s few weeks, the researchers found that trust in generative AI tools increased in the short run for a majority of students. But in the long run, students said they realized they needed to be competent programmers even without the help of AI tools, because these tools would often generate incorrect code or would not help with code-comprehension tasks.
The study was motivated by the dramatic change in the skills required from undergraduate computer science students since the advent of generative AI tools that can create code from scratch. “Computer science and programming is changing immensely,” said Gerald Soosairaj, one of the paper’s senior authors and an associate teaching professor in the Department of Computer Science and Engineering at the University of California San Diego.
Today, students are tempted to rely too heavily on chatbots to generate code and, as a result, might not learn the basics of programming, the researchers said. These tools also might generate code that is incorrect or vulnerable to cybersecurity attacks. Conversely, students who refuse to use chatbots miss out on the opportunity to program faster and be more productive. But once they graduate, computer science students will most likely use generative AI tools in their day-to-day work and will need to be able to do so effectively. This means they will still need a solid understanding of the fundamentals of computing and how programs work, so they can evaluate the AI-generated code they will be working with, the researchers said.
“We found that student trust, on average, increased as they used GitHub Copilot throughout the study. But after completing the second part of the study, a more elaborate project, students felt that using Copilot to its full extent requires a competent programmer who can complete some tasks manually,” said Soosairaj.
The study surveyed 71 junior and senior computer science students, half of whom had never used GitHub Copilot. After an 80-minute class in which researchers explained how GitHub Copilot works and had students use the tool, half of the students said their trust in the tool had increased, while about 17% said it had decreased. Students then took part in a 10-day project in which they used GitHub Copilot throughout to add a small piece of new functionality to a large open-source codebase. At the end of the project, about 39% of students said their trust in Copilot had increased, about 37% said it had decreased somewhat, and about 24% said it had not changed.
The results of this study have important implications for how computer science educators should approach the introduction of AI assistants in introductory and advanced courses. Researchers make a series of recommendations for computer science educators in an undergraduate setting.
To help students calibrate their trust and expectations of AI assistants, computer science educators should provide opportunities for students to use AI programming assistants for tasks with a range of difficulty, including tasks within large codebases.
To help students determine how much they can trust AI assistants’ output, computer science educators should ensure that students can still comprehend, modify, debug, and test code in large codebases without AI assistants.
Computer science educators should ensure that students are aware of how AI assistants generate output via natural language processing so that students understand the AI assistants’ expected behavior.
Computer science educators should explicitly introduce and demonstrate key features of AI assistants that are useful for contributing to a large codebase, such as adding files as context when asking the assistant to explain code, and using built-in commands such as "/explain", "/fix" and "/docs" in GitHub Copilot, as in the example below.
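As a concrete illustration (a hypothetical workflow, though the slash commands are real GitHub Copilot Chat features): a student fixing a bug in an unfamiliar part of a large project might first attach the relevant source file to the chat as context, use "/explain" to get a plain-language summary of a highlighted function, use "/fix" on the code implicated by a failing test and review the proposed patch before accepting it, and finally use "/docs" to draft a documentation comment for the revised code. Used this way, the assistant’s output is checked at each step against the student’s own reading of the code, which is the kind of calibrated trust the authors recommend.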
“CS educators should be mindful that how we present and discuss AI assistants can impact how students perceive such assistants,” the researchers write.
Researchers plan to repeat their experiment and survey with a larger pool of 200 students this winter quarter.
Evolution of Programmers’ Trust in Generative AI Programming Assistants
Anshul Shah, Elena Tomson, Leo Porter, William G. Griswold, and Adalbert Gerald Soosai Raj. Department of Computer Science and Engineering, University of California San Diego
Thomas Rexin, North Carolina State University
Method of Research
Experimental study
Subject of Research
People
Article Title
Evolution of Programmers’ Trust in Generative AI Programming Assistants
Article Publication Date
11-Nov-2025
Software developers show less constructive scepticism when using AI assistants than when working with human colleagues
When writing program code, software developers often work in pairs—a practice that reduces errors and encourages knowledge sharing. Increasingly, AI assistants are now being used for this role. But this shift in working practice isn’t without its drawbacks, as a new empirical study by computer scientists in Saarbrücken reveals. Developers tend to scrutinize AI-generated code less critically and they learn less from it. These findings will be presented at a major scientific conference in Seoul.
When two software developers collaborate on a programming project, a practice known in technical circles as 'pair programming', it tends to yield a significant improvement in the quality of the resulting software. 'Developers can often inspire one another and help avoid problematic solutions. They can also share their expertise, thus ensuring that more people in their organization are familiar with the codebase,' explains Sven Apel, professor of computer science at Saarland University. Together with his team, Apel has examined whether this collaborative approach works equally well when one of the partners is an AI assistant. In the study, 19 students with programming experience were divided into teams: six pairs worked human-to-human, while seven students each collaborated with an AI assistant. The methodology for measuring knowledge transfer was developed by Niklas Schneider as part of his Bachelor's thesis.
For the study, the researchers used GitHub Copilot, an AI-powered coding assistant introduced by Microsoft in 2021, which, like similar products from other companies, has now been widely adopted by software developers. These tools have significantly changed how software is written. 'It enables faster development and the generation of large volumes of code in a short time. But this also makes it easier for mistakes to creep in unnoticed, with consequences that may only surface later on,' says Sven Apel. The team wanted to understand which aspects of human collaboration enhance programming and whether these can be replicated in human-AI pairings. Participants were tasked with developing algorithms and integrating them into a shared project environment.
'Knowledge transfer is a key part of pair programming,' Apel explains. 'Developers will continuously discuss current problems and work together to find solutions. This does not involve simply asking and answering questions; it also means that the developers share effective programming strategies and volunteer their own insights.' According to the study, such exchanges also occurred in the AI-assisted teams, but the interactions were less intense and covered a narrower range of topics. 'In many cases, the focus was solely on the code,' says Apel. 'By contrast, human programmers working together were more likely to digress and engage in broader discussions and were less focused on the immediate task.'
One finding particularly surprised the research team: 'The programmers who were working with an AI assistant were more likely to accept AI-generated suggestions without critical evaluation. They assumed the code would work as intended,' says Apel. 'The human pairs, in contrast, were much more likely to ask critical questions and were more inclined to carefully examine each other's contributions.' He believes this tendency to trust AI more readily than human colleagues may extend to other domains as well. 'I think it has to do with a certain degree of complacency, a tendency to assume the AI's output is probably good enough, even though we know AI assistants can also make mistakes.' Apel warns that this uncritical reliance on AI could lead to the accumulation of 'technical debt', the hidden cost of the future work needed to correct these mistakes, which complicates the further development of the software.
For Apel, the study highlights the fact that AI assistants are not yet capable of replicating the richness of human collaboration in software development. 'They are certainly useful for simple, repetitive tasks,' says Apel. 'But for more complex problems, knowledge exchange is essential, and that currently works best between humans, possibly with AI assistants as supporting tools.' Apel emphasizes the need for further research into how humans and AI can collaborate effectively while still retaining the kind of critical eye that characterizes human collaboration.
Alisa Welter, a PhD student in Apel’s group and first author of the article, will present the findings at the 40th IEEE/ACM International Conference on Automated Software Engineering—one of the top three conferences in the field. The conference will take place from November 16 to 20 in Seoul, South Korea. Out of the approximately 1,200 papers submitted to the conference, only 150 were accepted for presentation. The study was funded by the European Union through the ERC Advanced Grant ‘Brains On Code’ (see press release from April 26, 2022): https://saarland-informatics-campus.de/piece-of-news/brains-on-code/
Further information:
Empirical study: https://www.se.cs.uni-saarland.de/publications/docs/WSD+.pdf
40th IEEE/ACM International Conference on Automated Software Engineering (with a brief abstract of the paper): https://conf.researchr.org/home/ase-2025
Software Engineering research group at Saarland University: https://www.se.cs.uni-saarland.de
Method of Research
Observational study
Subject of Research
People
Article Title
An Empirical Study of Knowledge Transfer in AI Pair Programming
Article Publication Date
16-Nov-2025