New AI model segments crop diseases with just a few images
Alongside the model, a high-quality benchmark dataset covering 101 pest and disease classes has been publicly released. Together, they offer a powerful and label-efficient solution for real-world plant health monitoring.
Pests and diseases cause 20–40% annual global crop loss (FAO), posing a direct threat to food security. Traditional detection relies heavily on manual observation, which is labor-intensive, subjective, and slow for large planting areas. While deep learning has accelerated automated crop health diagnostics, most progress has focused on image-level recognition rather than pixel-wise segmentation. Semantic segmentation—labeling every pixel in an image—can locate diseased areas with precision, yet requires extensive annotation. Field images further complicate the task due to lighting variability, background interference, and subtle symptom differences. Current few-shot approaches have rarely been applied in agriculture and often fail when lesions are small, scattered, or visually similar to surrounding tissue. These challenges highlight the need for a robust segmentation method that operates with limited labeled data.
A study (DOI: 10.1016/j.plaphe.2025.100121) published in Plant Phenomics on 30 September 2025 by Xijian Fan's team at Nanjing Forestry University presents a label-efficient few-shot semantic segmentation framework that enables accurate pixel-level detection of plant pests and diseases in real-world agricultural environments with minimal annotated samples.
In this study, the authors rigorously evaluated the proposed SegPPD-FS framework for few-shot semantic segmentation of plant pests and diseases, using the mean intersection over union (mIoU) as the primary metric and foreground–background IoU (FB-IoU) as a supplementary indicator. They first benchmarked nine state-of-the-art FSS models (HDMNet, MSANet, MIANet, SegGPT, BAM, PFENet, DCP, PerSAM, and MGCL) on the SegPPD-101 dataset and selected HDMNet, which achieved the best mIoU, as the baseline to be improved. SegPPD-FS was then built by integrating two key modules, the similarity feature enhancement module (SFEM) and the hierarchical prior knowledge injection module (HPKIM), to refine query features at different stages.
All models were implemented in PyTorch and trained on a single NVIDIA GeForce RTX 4060 Ti GPU, using ResNet50 or VGG16 backbones with PSPNet as a fixed feature extractor and meta-learning for the remaining components. Training was performed with AdamW over 150 epochs on the SegPPD-101 dataset, where 80 categories were used for training and 21 disjoint categories for combined validation/testing to assess cross-crop generalization under 1-, 2-, 4-, and 8-shot settings.
Results show that SegPPD-FS consistently outperforms HDMNet and other FSS methods in both mIoU and FB-IoU, achieving gains of up to 1.00% mIoU and 0.69% FB-IoU with ResNet50, and performing particularly well on objects of varying scales, although small or rare classes remain more challenging. Qualitative comparisons confirm closer alignment with ground-truth masks, with SFEM enhancing foreground discrimination and HPKIM effectively handling varying infestation severity, lighting conditions, and high background similarity. Ablation studies reveal performance drops when either SFEM or HPKIM is removed and show that an attention-based distillation loss improves learning, whereas an auxiliary loss and a KL divergence-based variant can be detrimental. Despite slightly lower speed (5.14 FPS) than some competitors, SegPPD-FS offers roughly 10 percentage points higher accuracy and converges in about 60 epochs, indicating both efficient optimization and stable adaptation.
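For readers unfamiliar with the two evaluation metrics mentioned above, the following is a minimal sketch of how mIoU and FB-IoU are typically computed, assuming predicted and ground-truth masks stored as integer-labelled NumPy arrays. It illustrates the standard definitions only and is not the authors' evaluation code.

```python
import numpy as np

def class_iou(pred: np.ndarray, gt: np.ndarray, cls: int) -> float:
    """Intersection over union for a single class label."""
    pred_c, gt_c = pred == cls, gt == cls
    union = np.logical_or(pred_c, gt_c).sum()
    if union == 0:
        return float("nan")  # class absent in both masks; ignored in the mean
    return np.logical_and(pred_c, gt_c).sum() / union

def mean_iou(pred: np.ndarray, gt: np.ndarray, classes) -> float:
    """Mean IoU over the given foreground classes."""
    return float(np.nanmean([class_iou(pred, gt, c) for c in classes]))

def fb_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Foreground-background IoU: average IoU of the foreground (>0) and background (0) regions."""
    pred_bin, gt_bin = (pred > 0).astype(int), (gt > 0).astype(int)
    return float(np.nanmean([class_iou(pred_bin, gt_bin, 1),
                             class_iou(pred_bin, gt_bin, 0)]))

# Toy example with two foreground classes on a 4x4 mask
pred = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 2, 2], [0, 0, 2, 2]])
gt   = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 2, 2], [0, 2, 2, 2]])
print(mean_iou(pred, gt, classes=[1, 2]), fb_iou(pred, gt))
```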
This research advances precision agriculture by reducing the heavy dependence on manual annotation and expert involvement. With the ability to learn from a handful of samples, SegPPD-FS offers an efficient tool for early warning diagnostics, digital field scouting, yield risk forecasting, and automated phenotyping. Its robust outputs may support integration into smart farming platforms, UAV-based surveillance, IoT crop monitoring systems, and large-scale disease mapping.
###
References
DOI
Original Source URL
https://doi.org/10.1016/j.plaphe.2025.100121
Funding information
This work was funded by the Key R&D Program of Jiangsu Province (BE2023369 and BE2023352) and the Huai'an Science and Technology Plan Project (HAB202373).
About Plant Phenomics
Plant Phenomics is dedicated to publishing novel research that will advance all aspects of plant phenotyping, from the cell to the plant population level, using innovative combinations of sensor systems and data analytics. Plant Phenomics also aims to connect phenomics to other science domains, such as genomics, genetics, physiology, molecular biology, bioinformatics, statistics, mathematics, and computer sciences. Plant Phenomics should thus contribute to advancing plant sciences and agriculture/forestry/horticulture by addressing key scientific challenges in the area of plant phenomics.
Journal
Plant Phenomics
Method of Research
Experimental study
Subject of Research
Not applicable
Article Title
SegPPD-FS: Segmenting plant pests and diseases in the wild using few-shot learning
Optical properties of plants reflect ozone-induced damage
A portable optical scanner non-invasively measures environmental-stress-induced changes in plants’ internal structures
Chiba University
Image caption: Researchers develop a novel optical coherence tomography scanner that can non-invasively measure changes in the optical properties of leaves exposed to environmental pollutants and reflect stress-induced internal damage.
Credit: Dr. Tatsuo Shiina, Chiba University, Japan
Escalating pollution and contamination of water and soil are emerging as serious threats to plant growth and overall health. Plants exposed to environmental pollutants for extended periods exhibit changes in their color, texture, and internal structure. Among these pollutants, ozone concentrations are particularly high in urban and industrial regions and have been reported to inhibit plant growth and reduce crop yields. Conventional assessments rely on visual inspections, microscopic examinations, and remote sensing. However, these methods often require invasive analysis and may not provide accurate quantitative measurements or facilitate long-term monitoring of internal changes.
To overcome these challenges, an international team of researchers developed a portable optical coherence tomography (OCT) scanner device that enables non-invasive, non-destructive, non-contact, and quantitative evaluation of internal plant structures. The pioneering work was conducted by Associate Professor Tatsuo Shiina and Dr. Hayate Goto from the Graduate School of Science and Engineering, Chiba University, Japan; Assistant Professor Jumar Cadondon from the University of the Philippines Visayas, Philippines; and Professor Maria Cecilia Galvez and Professor Edgar Vallar from De La Salle University Manila, Philippines.
Their work was published in Volume 15 of the journal Scientific Reports on October 31, 2025. Giving further insights, Dr. Shiina says, “By using OCT, the internal structure can be non-destructively quantified layer by layer to identify areas affected by the external environment. Since stress responses in plants appear first in the interior of the plant, OCT has the potential to elucidate environmental stresses that cause internal changes in plants.”
The researchers performed OCT measurements on the leaves of white clover (Trifolium repens), an indicator plant that is highly sensitive to environmental pollutants. At high concentrations, ozone enters leaves through their stomata (pores on the leaf surface) and destroys the palisade tissue (internal cell layer), thereby changing its optical properties. To quantify these optical changes, the researchers first exposed potted indicator plants to high ozone concentrations and monitored temporal changes in the same leaves over 14 days. Further, they measured changes in water lost by transpiration in leaves with cut stems to differentiate between changes caused by water/transpiration stress and ozone stress.
The experiments revealed that ozone exposure attenuated light scattering within the palisade layer, indicating structural disruption and damage to the cell walls and intercellular boundaries. The researchers also noted a gradual increase in palisade tissue thickness, consistent with the observed decrease in OCT signal intensity and ozone-induced structural damage.
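To make the two reported quantities concrete, here is an illustrative sketch, not the authors' analysis code, of how mean backscatter intensity in the palisade layer and layer thickness could be derived from an OCT B-scan. It assumes a 2D NumPy array of A-scans (depth by lateral position), a hypothetical axial pixel size, and manually identified layer boundaries; all of these parameters are assumptions for illustration.

```python
import numpy as np

def palisade_metrics(bscan: np.ndarray, top_px: int, bottom_px: int,
                     axial_res_um: float = 4.0):
    """Return (mean backscatter in dB, thickness in micrometres) for the layer
    between the two depth indices. axial_res_um is an assumed axial pixel size."""
    layer = bscan[top_px:bottom_px, :]               # crop the palisade band
    mean_db = 10 * np.log10(layer.mean() + 1e-12)    # mean intensity on a log scale
    thickness_um = (bottom_px - top_px) * axial_res_um
    return mean_db, thickness_um

# Toy example: synthetic B-scan with a brighter band between rows 40 and 90
rng = np.random.default_rng(0)
bscan = rng.random((200, 512)) * 0.1
bscan[40:90, :] += 0.5
print(palisade_metrics(bscan, top_px=40, bottom_px=90))
```

In this framing, a drop in the mean intensity over repeated measurements of the same leaf region would correspond to the attenuated scattering reported above, while a growing boundary separation would correspond to the observed thickening of the palisade tissue.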
Having established a baseline of ozone exposure, the researchers then sampled indicator plants from four regions of Chiba Prefecture, Japan, with ozone concentrations ranging from 0.04 to 0.16 ppm. They noted a similar trend in the OCT parameters of the sampled leaves, suggesting that the internal structural characteristics of leaves reflect the level of ozone exposure. Furthermore, on-site OCT measurements can exclude the confounding stress caused by stem cutting and transportation, providing a more accurate measure of the effects of ozone exposure.
Overall, these findings demonstrate the feasibility of OCT in the evaluation of environmental stress in plants, especially at the cellular level prior to the onset of symptoms. Moreover, OCT scanning offers a non-invasive, faster, and simpler alternative to conventional methods that require chemical fixation and staining, thus allowing the evaluation of the same living leaves over a longer period. Timely assessments with portable OCT scanning can improve disease monitoring and facilitate the early detection of deficiencies or stress-induced changes, thereby allowing early intervention to minimize losses. Additional studies can help validate its effectiveness in different environmental conditions like varying humidity, temperature, and light intensities.
Dr. Shiina concludes by saying, “Continued research in this direction could expand OCT’s utility in optimizing crop environments and improving agricultural productivity. The ability to estimate atmospheric and soil conditions on-site from a single OCT measurement provides a promising approach to advancing crop management and environmental monitoring.”
***
Reference
Authors: Hayate Goto (1), Jumar Cadondon (2), Maria Cecilia Galvez (3), Edgar Vallar (3), and Tatsuo Shiina (1)
Affiliations:
(1) Graduate School of Science and Engineering, Chiba University
(2) Division of Physical Sciences and Mathematics, College of Arts and Sciences, University of the Philippines Visayas
(3) Environment and Remote Sensing Research (EARTH) Laboratory, Physics Department, College of Science, De La Salle University Manila
DOI: 10.1038/s41598-025-22104-0
About Associate Professor Tatsuo Shiina from Chiba University, Japan
Dr. Tatsuo Shiina is an Associate Professor at the Graduate School of Science and Engineering, Chiba University, Japan. His research interests include photoelectric measurement, scattering optics, and atmospheric physics. His work focuses on developing a short-range mini-lidar for monitoring the lower atmosphere and gases, inventing a portable OCT scanner for industrial applications, enabling internal measurements of living organisms, plants, and industrial materials, and enhancing the efficiency and internal sensing of laser light in highly scattering materials.
Funding
This research was funded by the Japan Science and Technology Agency (JST) under the program for the establishment of university fellowships towards the creation of science technology innovation, Grant Number JPMJFS2107.
Journal
Scientific Reports
Method of Research
Experimental study
Subject of Research
Not applicable
Article Title
OCT analysis of white clover leaves affected by regional ozone stress
New AI-powered 3D tool enables fast, label-free phenotyping in rice and wheat
By combining radiance-field reconstruction with SAM2 segmentation, IPENS enables users to obtain precise organ-level geometry from ordinary multi-view images using only a few prompts. In rice and wheat, the system accurately measures voxel volume, leaf surface area, and leaf dimensions, operating non-destructively and at high speed.
Plant phenotyping technologies underpin the development of genotype-phenotype association models and guide trait improvement in modern breeding. Traditional 2D imaging methods struggle to capture complex plant structures, while field phenotyping often requires manual sampling and destructive testing. Recent advances in 3D reconstruction—including Neural Radiance Fields (NeRF) and 3D Gaussian Splatting—have demonstrated strong potential for non-invasive trait evaluation, but most models require large annotated datasets, perform poorly on occluded organs like rice grains, or demand repetitive user interaction per target. Unsupervised approaches lack precision at grain-scale resolution, and multi-target segmentation remains inefficient.
A study (DOI: 10.1016/j.plaphe.2025.100106) published in Plant Phenomics on 15 September 2025 by Youqiang Sun's team at the Chinese Academy of Sciences presents IPENS, an interactive unsupervised phenotyping framework that provides researchers and breeders with rapid, reliable phenotypic data to accelerate intelligent breeding and improve crop productivity.
To evaluate the performance of IPENS, the researchers first designed a quantitative segmentation experiment using the MMR (rice) and MMW (wheat) datasets, where 30% of the data served as a validation set and the remaining portion was used for comparative algorithm training. The segmentation task was conducted by manually placing two positive and two negative prompts on both the first and last video frames, allowing the model to perform unsupervised 3D instance segmentation guided by the prompts. Segmentation quality was assessed using IoU, precision, recall, and F1 score, and results were compared with existing mainstream algorithms, including the unsupervised CrossPoint, the supervised interactive Agile3D, and the fully supervised state-of-the-art OneFormer3D. A time-performance evaluation was also conducted by measuring segmentation time for single- and multi-target scenarios, benchmarking IPENS against SA3D and analyzing how efficiency scales with the number of targets. Beyond segmentation, phenotypic accuracy was verified through voxel volume estimation and leaf-trait measurement, examining how multi-stage point cloud processing (convex hull → mesh → mesh subdivision) influences error and model stability.
Results show that IPENS achieved IoU scores of 61.48%, 69.54%, and 60.13% for rice grain, leaf, and stem, and 92.82%, 86.47%, and 89.76% for wheat panicle, leaf, and stem, respectively. Its mean IoU surpassed that of the unsupervised CrossPoint (rice 23.41% / wheat 16.50%) and exceeded Agile3D's first-interaction performance, demonstrating competitive accuracy without labeled data. Time analysis revealed roughly 3.3× acceleration compared with SA3D, with single-organ segmentation taking about 70 seconds and multi-organ inference scaling linearly with the number of targets. Trait estimation further confirmed model reliability, with rice grain voxel volume reaching R² = 0.7697 (RMSE 0.0025) and wheat panicle voxel volume R² = 0.9956. Leaf area accuracy improved progressively after subdivision (rice R² = 0.84; wheat R² = 1.00), and leaf length/width estimation maintained millimeter-level errors (rice R² = 0.97/0.87; wheat R² = 0.99/0.92), validating that higher segmentation quality directly supports stable phenotypic prediction.
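As a rough illustration of the first stage of the trait-estimation step described above, the sketch below takes a segmented organ point cloud and derives a convex-hull volume and coarse surface area. This is an assumption-laden simplification, not the IPENS pipeline: the paper's subsequent meshing and mesh-subdivision stages, which refine leaf area, are not reproduced here.

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_traits(points: np.ndarray):
    """points: (N, 3) array of an organ's 3D points (e.g. a rice grain or a leaf).
    Returns (convex hull volume, convex hull surface area) in the cloud's units."""
    hull = ConvexHull(points)
    return hull.volume, hull.area

# Toy example: 500 random points roughly filling a unit cube
rng = np.random.default_rng(42)
pts = rng.random((500, 3))
vol, area = hull_traits(pts)
print(f"volume ~ {vol:.3f}, surface area ~ {area:.3f}")  # close to 1.0 and 6.0
```

For a convex organ such as a grain, the hull volume is already a reasonable proxy; for thin, concave structures such as leaves, a fitted and subdivided mesh is needed, which is consistent with the reported improvement in leaf-area accuracy after subdivision.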
IPENS offers a scalable, non-invasive and label-free tool for field and greenhouse phenotyping. By rapidly generating accurate 3D trait data, it provides breeding programs with efficient support for yield-related evaluations, genomic selection, and organ-level trait screening. The method improves throughput for grain counting, biomass measurement, and plant architecture assessment while reducing reliance on expert annotators. With strong cross-species generalization demonstrated in rice, wheat, and other crops, IPENS has potential for integration into automated phenotyping chambers, robotic imaging platforms, and future smart-agriculture pipelines. Its capacity to link phenotype data to genomic models may significantly accelerate trait improvement and breeding decision-making.
###
References
DOI
Original Source URL
https://doi.org/10.1016/j.plaphe.2025.100106
Funding information
This research was supported by the National Key Research and Development Program of China (Grant Number 2023YFD1901003) and the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant XDA28120402).
About Plant Phenomics
Plant Phenomics is dedicated to publishing novel research that will advance all aspects of plant phenotyping, from the cell to the plant population level, using innovative combinations of sensor systems and data analytics. Plant Phenomics also aims to connect phenomics to other science domains, such as genomics, genetics, physiology, molecular biology, bioinformatics, statistics, mathematics, and computer sciences. Plant Phenomics should thus contribute to advancing plant sciences and agriculture/forestry/horticulture by addressing key scientific challenges in the area of plant phenomics.
Journal
Plant Phenomics
Method of Research
Experimental study
Subject of Research
Not applicable
Article Title
IPENS: Interactive unsupervised framework for rapid plant phenotyping extraction via NeRF-SAM2 fusion