AI can find cancer pathologists miss
Image: Carolina Wählby, Professor of Quantitative Microscopy at the Department of Information Technology and SciLifeLab.
Credit: Mikael Wallerstedt
Men assessed as healthy after a pathologist analyses their tissue sample may still have an early form of prostate cancer. Using AI, researchers at Uppsala University have been able to find subtle tissue changes that allow the cancer to be detected long before it becomes visible to the human eye.
Previous research has demonstrated that AI is able to detect tissue changes indicative of cancer. In the current study, published in Scientific Reports, the researchers show that AI can also find cancers missed by pathologists.
“The study has been nicknamed the ‘missed study’, as the goal of finding the cancer was ‘missed’ by the pathologists. We have now shown that with the help of AI, it is possible to find signs of prostate cancer that were not observed by pathologists in more than 80 per cent of samples from men who later developed cancer,” says Carolina Wählby, who led the AI development in the study.
The project is based on a collaboration with Umeå University, where researchers collected samples from men called in for sampling over a number of years. All 232 men in the study were assessed as healthy when their biopsies were examined by pathologists. Within two and a half years, half of the men had developed aggressive prostate cancer, while the rest were still cancer-free eight years later.
AI trained to detect signs of cancer
As all tissue samples were initially assessed as negative, the researchers developed a new way to train the AI tool. It was trained by analysing each biopsy image bit by bit, with the assumption that abnormal patterns ought to be present somewhere in the biopsies that came from patients who later developed aggressive cancer, while the other images should not contain such patterns. The AI was then tested on an independent set of images.
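This weak-label setup resembles what machine-learning practitioners call multiple-instance learning. The sketch below is a minimal PyTorch illustration, not the study's actual code: it treats each biopsy as a "bag" of patch features carrying only a patient-level label, and the network, pooling choice and feature dimensions are assumptions made for illustration.

```python
# Hypothetical sketch of weakly supervised, patch-level training: each biopsy
# (a "bag" of image patches) carries only a patient-level label, and the model
# must discover which patches carry the abnormal pattern.
import torch
import torch.nn as nn

class PatchScorer(nn.Module):
    """Scores each patch; the biopsy-level prediction pools over patches."""
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU())
        self.head = nn.Linear(128, 1)  # per-patch abnormality logit

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (num_patches, feat_dim) features from one biopsy
        logits = self.head(self.encoder(patches)).squeeze(-1)
        # Max pooling encodes the weak-label assumption described above: a
        # positive biopsy needs only *some* abnormal patch, while a negative
        # biopsy should contain none.
        return logits.max()

model = PatchScorer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

# Toy bag: 200 patch feature vectors from one biopsy, patient-level label 1.0
bag = torch.randn(200, 512)
label = torch.tensor(1.0)

optimizer.zero_grad()
loss = loss_fn(model(bag).unsqueeze(0), label.unsqueeze(0))
loss.backward()
optimizer.step()
```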
“When we looked at the patterns that the AI ranked as informative, we saw changes in the tissue surrounding the glands in the prostate – changes also observed in other studies. This shows that AI analysis of routine biopsies can detect subtle signs indicating clinically significant prostate cancer before it becomes obvious to a pathologist,” says Wählby.
The researchers suggest that this type of analysis could be used to decide how soon men who have been assessed as healthy should be followed up. The imaging data collected and the researchers’ methods are openly available for further research and development.
The image shows an original biopsy (left) and a colour-coded biopsy (right). The warm colours in the colour-coded image show abnormal patterns indicating cancer.
Credit: Carolina Wählby
Journal
Scientific Reports
Method of Research
Imaging analysis
Subject of Research
Human tissue samples
Article Title
Discovery of tumour indicating morphological changes in benign prostate biopsies through AI
Article Publication Date
21-Aug-2025
Scientists train deep-learning models to scrutinize biopsies like a human pathologist
MedSight AI Research Lab
Image: After training on pathologists' slide-reviewing data, the PEAN model can perform a multiclass classification task and imitate the pathologists' slide-reviewing behaviors (Panel a). Panel b shows the data distribution of the training, internal testing and external testing datasets; its color legend for the various diseases also applies to Panels c and d. Panel c lists the total number of patients with each skin condition in the dataset, and Panel d the number of slide-reviewing operations performed by each pathologist, with the "Overlap" column including the images listed for each pathologist. Panel e depicts regions of interest as heatmaps (second row) in which the pathologist's gaze highly overlaps with the actual tumor tissue, marked in blue in the first row.
Credit: Tianhang Nan, Northeastern University, China
In the Age of AI, many healthcare providers dream of a digital assistant, unencumbered by fatigue, workload, burnout or hunger, that could provide a quick second opinion for medical decisions, including diagnoses, treatment plans and prescriptions.
Today, the computing power and AI know-how are available to develop such assistants. However, replicating the expertise of a specially trained, highly experienced pathologist, radiologist or another specialist isn’t easy or straightforward. AI algorithms, in particular, require vast amounts of data to create highly accurate models. And the more high-quality data, the better.
For pathologists in particular, a method called pixel-wise manual annotation can be used with great success to train AI models to accurately diagnose specific diseases from tissue biopsy images. This method, however, requires a trained pathologist to annotate every pixel in a tissue biopsy image, outlining regions of interest for machine learning model training. The annotation burden for pathologists in this case is obvious and limits the amount of quality data that can be created for model training, thereby limiting the diagnostic precision of the eventual model.
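For contrast, here is a minimal sketch of what pixel-wise supervision looks like in practice, assuming a PyTorch segmentation setup; the tiny placeholder network and tensor shapes are illustrative, but they show why every training image must be paired with a full-resolution label mask.

```python
# Minimal sketch of pixel-wise supervision: every training image needs a
# complete annotation mask at the same resolution, which is what makes this
# approach so labor-intensive for pathologists.
import torch
import torch.nn as nn

seg_model = nn.Sequential(              # stand-in for a real segmentation net
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),                # per-pixel abnormality logit
)
loss_fn = nn.BCEWithLogitsLoss()

image = torch.randn(1, 3, 256, 256)                    # one biopsy tile
mask = torch.randint(0, 2, (1, 1, 256, 256)).float()   # pathologist's pixel labels

loss = loss_fn(seg_model(image), mask)  # every single pixel supervises the model
loss.backward()
```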
To address this challenge, a team of researchers led by scientists from the MedSight AI Research Lab, The First Hospital of China Medical University and the National Joint Engineering Research Center for Theranostics of Immunological Skin Diseases in Shenyang, China, developed a method to annotate biopsy image data with eye-tracking devices, significantly reducing the burden of manually annotating every pixel of interest in a tissue biopsy image.
The researchers published their study in Nature Communications on July 1.
“To obtain pathologists’ expertise with minimal pathologist workload, … we collect[ed] the image review patterns of pathologists [using] eye-tracking devices. Simultaneously, we design[ed] a deep learning system, Pathology Expertise Acquisition Network (PEAN), based on the collected visual patterns, which can decode pathologists’ expertise [and] diagnose [whole slide images],” said Xiaoyu Cui, associate professor at the MedSight AI Research Lab in the College of Medicine and Biological Information Engineering at Northeastern University and senior author of the research paper.
Specifically, the team hypothesized that the visual data obtained with eye-tracking devices while pathologists review tissue biopsy images can teach an AI model which areas are of particular interest in a biopsy image, providing a much less burdensome alternative to pixel-wise annotation. In this way, the team hoped to extract the pathologists’ expertise in a much less labor-intensive way and generate much more data to develop and train more accurate deep learning-assisted diagnostic models.
To achieve this, the team collected slide-reviewing data from pathologists using custom-developed software and an eye-tracking device that recorded the pathologists’ eye movements, their zooming and panning of whole-slide tissue images, and their diagnoses for each sample. A total of 5,881 tissue samples encompassing five different types of skin lesions were reviewed.
The PEAN system computes “expertise values” for all areas of a tissue sample, simulating the pathologist’s regions of interest by comparing the eye-tracking data with manual pixel-annotation data for the same biopsy images. With this training data, PEAN models could predict the suspicious regions of each biopsy image to imitate pathologists’ expertise (PEAN-I) or classify tissue sample diagnoses (PEAN-C).
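One plausible ingredient of such a pipeline, sketched below as an assumption rather than the paper's exact procedure, is converting raw gaze fixations into a smoothed, region-level heatmap that can serve as a supervision target; the grid size, dwell-time weighting and Gaussian smoothing are illustrative choices.

```python
# Hypothetical sketch: turn raw gaze fixations into a per-region "expertise"
# heatmap that could supervise a model like PEAN-I.
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_heatmap(fixations, durations, grid=(64, 64)):
    """fixations: (N, 2) normalized (x, y) in [0, 1); durations: (N,) seconds."""
    heat = np.zeros(grid)
    for (x, y), d in zip(fixations, durations):
        gy, gx = int(y * grid[0]), int(x * grid[1])
        heat[gy, gx] += d                        # dwell time weights attention
    heat = gaussian_filter(heat, sigma=2.0)      # spread fixations to neighbors
    return heat / heat.max() if heat.max() > 0 else heat

fix = np.random.rand(50, 2)   # toy fixation points
dur = np.random.rand(50)      # toy dwell times
expertise = gaze_heatmap(fix, dur)  # high values ≈ regions the expert studied
```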
Remarkably, PEAN-C achieved an accuracy of 96.3% and an area under the curve (AUC) of 0.992 on the internal testing set; AUC measures how well a model distinguishes positive from negative samples. On the external testing set of tissue samples the system hadn’t been trained on, it achieved an accuracy of 93.0% and an AUC of 0.984, surpassing the accuracy of the second-best AI classifier by 5.5% on the same external testing set.
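For readers unfamiliar with the metric, AUC can be computed from model scores and true labels in a few lines with scikit-learn; the toy numbers below are synthetic and unrelated to the study's data.

```python
# Quick illustration of the AUC metric quoted above.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 1, 1, 0])                   # ground-truth labels
y_score = np.array([0.1, 0.4, 0.85, 0.9, 0.35, 0.3])    # model probabilities

# AUC is the fraction of (positive, negative) pairs the model ranks correctly;
# here 8 of 9 pairs are ranked correctly, so this prints roughly 0.889.
print(roc_auc_score(y_true, y_score))
```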
The PEAN-I system, by imitating the expertise of pathologists, can additionally select regions of interest that help other learning models diagnose tissue images more accurately. When three other learning models, CLAM, ABMIL and TransMIL, were trained with tissue sample images generated by PEAN-I, their accuracy and AUC increased significantly, with p-values of 0.0053 and 0.0161, respectively, as determined by paired t-tests.
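A paired t-test matches each baseline result with its PEAN-I-augmented counterpart on the same data split; the sketch below uses made-up per-fold accuracies purely to show the mechanics.

```python
# Sketch of the paired t-test used to compare models with and without
# PEAN-I's regions of interest; the per-fold accuracies are fabricated.
from scipy.stats import ttest_rel

acc_baseline = [0.88, 0.90, 0.87, 0.91, 0.89]    # e.g., CLAM alone, per fold
acc_with_pean = [0.91, 0.93, 0.90, 0.94, 0.92]   # same folds, PEAN-I regions added

t_stat, p_value = ttest_rel(acc_with_pean, acc_baseline)
print(p_value)  # a small p-value suggests the paired improvement is systematic
```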
“PEAN is not merely a new deep learning-based diagnosis system but a pioneering paradigm with the potential to revolutionize the current state of intelligent medical research. It can extract and quantify human diagnostic expertise, thereby overcoming common drawbacks of mainstream models, such as high human resource consumption and low trust from physicians,” said Cui.
The research team acknowledges that it has explored only a fraction of PEAN’s potential for assisting healthcare providers with disease classification and lesion detection. In the future, the authors would like to apply PEAN to a range of downstream tasks, including personalized diagnosis, bionic humans and multimodal large predictive models.
“As for the ultimate goal, we aim to develop a unique ‘replica digital human’ for each experienced pathologist using PEAN and large language models, … facilitated by PEAN's two major advantages: low data collection costs and advanced conceptual design, enabling easy, large-scale multimodal data collection,” said Cui.
Tianhang Nan, Hao Quan, Bin Zheng, Xingyu Li, Mingchen Zou, Shuangdi Ning, Yue Zhao and Wei Qian from the College of Medicine and Biological Information Engineering at Northeastern University in Shenyang, China; Song Zheng, Yaoxing Guo, Hongduo Chen, Ruiqun Qi and Xinghua Gao from the Department of Dermatology at The First Hospital of China Medical University in Shenyang, China, and the Key Laboratory of Immunodermatology at the Ministry of Education and National Health Commission in the National Joint Engineering Research Center for Theranostics of Immunological Skin Diseases in Shenyang, China; Siyuan Qiao from the College of Computer Science and Technology at Fudan University in Shanghai, China; Xin Gao from the Computer Science Program in the Computer, Electrical and Mathematical Sciences and Engineering Division, the Center of Excellence for Smart Health (KCSH) and the Center of Excellence on Generative AI at King Abdullah University of Science and Technology (KAUST) in Thuwal, Saudi Arabia; Jun Niu from the Department of Dermatology at the General Hospital of Northern Theater Command in Shenyang, China; Chunfang Guo from the Department of Dermatology at Shenyang Seventh People’s Hospital in Shenyang, China; Yue Zhang from the Department of Dermatology at Shengjing Hospital of China Medical University in Shenyang, China; Xiaoqin Wang from the Center of Excellence on Generative AI at KAUST; Liping Zhao from the Department of Dermatology at Zhongyi Northeast International Hospital in Shenyang, China; and Ze Wu from the Computer Science Program in the Computer, Electrical and Mathematical Sciences and Engineering Division at KAUST contributed to this research.
This study was supported by grants from the China Key Research and Development Program (grant no. 2023YFC2508200) and the Liaoning Province Medical Engineering Cross Joint Fund (grant no. 2022-YGJC-76).
Journal
Nature Communications
Article Title
Deep learning quantifies pathologists’ visual patterns for whole slide image diagnosis