Creating luminescent biomaterials from wood
Merging molecular biology and photochemistry for breakthrough innovation
Image: Using genetic engineering, we successfully introduced a new chromophore structure, scopoletin, into lignin—a key component of plant cell walls. The resulting lignin exhibits strong, stable luminescence, pH responsiveness, and reversible photo-dimerization, offering unique photochemical functionalities with potential applications in environmental sensing and smart materials.
Credit: Masatsugu Takada (Ehime University)
Lignin is one of the most abundant aromatic polymers on Earth and has long been recognized as a promising biomass resource. However, due to its complex and heterogeneous structure and resistance to degradation, its utilization has largely been limited to combustion for energy. To unlock its full potential, our research focused on the optical properties of lignin, aiming to control its luminescence intensity and emission wavelength by manipulating the local environment around chromophores.
Specifically, we genetically engineered poplar trees to overexpress the enzyme Feruloyl-CoA 6’-hydroxylase (F6’H1), which converts feruloyl-CoA—an intermediate in lignin biosynthesis—into scopoletin, a coumarin derivative with excellent luminescent properties. The resulting lignin incorporated scopoletin structures, leading to a red-shift in emission wavelength into the visible range and suppression of fluorescence quenching.
The engineered lignin maintained clear luminescence even in low-polarity solvents, indicating uniform distribution of scopoletin within the lignin molecule. Furthermore, the luminescence was preserved when the lignin was embedded in polymer matrices, and its intensity varied depending on the solvent and polymer interactions, highlighting the importance of material design.
Additionally, the lignin exhibited pH-responsive fluorescence, with intensity increasing under alkaline conditions and decreasing under acidic conditions. Reversible photo-dimerization of scopoletin was also observed upon UV irradiation, endowing the lignin with light-responsive properties for the first time. These features suggest potential applications in stimuli-responsive materials, such as shape-memory polymers, photo-switchable gels, fluorescent tags, and 3D printing materials.
This pioneering study demonstrates the feasibility of transforming underutilized biomass into high-performance optical materials through molecular design and genetic engineering. It represents a significant step toward the development of environmentally friendly, sustainable photo-functional materials and offers promising prospects for future innovations in materials science, environmental technology, and biotechnology.
Journal
Plant Biotechnology Journal
Smart weeding with less data: New AI model learns to spot rare weeds in record time
Nanjing Agricultural University The Academy of Science
The model, called the Few-Shot Enhanced Attention (FSEA) network, incorporates plant-specific features—such as color cues and morphology—into its learning process. By integrating domain knowledge with advanced attention mechanisms, FSEA enables rapid and accurate adaptation to unfamiliar weeds in diverse field environments.
Weeds severely reduce crop yield and quality, and excessive herbicide use threatens both ecosystems and human health. Deep learning has revolutionized plant detection, but its data-driven nature demands vast, balanced datasets that are nearly impossible to obtain under field conditions. Agricultural images often feature occlusions, variable lighting, and an uneven distribution of weed species, limiting the generalizability of current models. Few-shot object detection (FSOD) offers a potential solution by enabling fast model adaptation from limited data. However, existing FSOD models lack domain-specific optimization for agricultural conditions, particularly when weeds overlap or vary greatly in morphology. To address these challenges, researchers developed the FSEA network for efficient few-shot weed detection.
A study (DOI: 10.1016/j.plaphe.2025.100086) published in Plant Phenomics on 5 July 2025 by Jingyao Gai’s team, Guangxi University, reduces the need for large, time-consuming datasets and paves the way for intelligent, eco-friendly weed management systems suitable for precision agriculture and sustainable crop production.
In this study, the Few-Shot Enhanced Attention (FSEA) network was evaluated against six state-of-the-art few-shot detectors (TFA, FSCE, Meta R-CNN, Meta-DETR, DCFS, and DiGEO) and a traditional detector (YOLOv7) to assess its adaptability to new weed species under limited training data. After 40 epochs of fine-tuning, FSEA demonstrated superior performance, achieving an all-class mean average precision (mAP) of 0.416 and a novel-class mAP of 0.346, outperforming all baseline methods. In contrast, fine-tuning-based models such as TFA and FSCE exhibited poor adaptation and severe base–novel trade-offs, while meta-learning-based approaches (Meta R-CNN, Meta-DETR) achieved more balanced but lower overall accuracy. Feature enhancement-based models (DCFS and DiGEO) improved feature discrimination but struggled with occlusion and small-object detection. The general-purpose YOLOv7 model overfitted under few-shot conditions, confirming its limited suitability for this task. Ablation experiments further validated each FSEA module: the feature fusion module increased base and novel mAP by 0.081 and 0.061, respectively, by focusing attention on green vegetation features; the feature enhancement module raised mAP by 0.105 and 0.044 by better capturing plant morphology; and the repulsion loss improved occlusion handling by an additional 0.024 and 0.014. Qualitative analyses confirmed that FSEA maintained robust detection across plant sizes and occlusion levels while achieving real-time inference at 32 frames per second. These results demonstrate that by integrating vegetation-specific color, morphology, and occlusion priors, FSEA effectively overcomes the data scarcity challenges in agricultural weed detection, ensuring both accuracy and efficiency in practical field environments.
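For readers unfamiliar with the detection metrics above, the reported mAP values reduce to two computations: box overlap (intersection over union, IoU) and per-class average precision over a ranked list of detections, averaged across classes. The following is a minimal NumPy sketch of both, assuming a simple [x1, y1, x2, y2] box format and COCO-style 101-point interpolation; it illustrates the metric itself, not the paper's evaluation code.

```python
# Minimal sketch of the computations behind mAP: IoU for box overlap and
# per-class average precision (AP) over confidence-ranked detections.
# Box format and the 101-point interpolation are illustrative conventions.
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def average_precision(scores, is_tp, n_gt):
    """AP for one class: area under the precision-recall curve
    (COCO-style 101-point interpolation). mAP = mean AP over classes."""
    order = np.argsort(scores)[::-1]              # rank detections by confidence
    tp = np.asarray(is_tp, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    precision = cum_tp / (np.arange(len(tp)) + 1)
    recall = cum_tp / max(n_gt, 1)
    return float(np.mean([precision[recall >= r].max(initial=0.0)
                          for r in np.linspace(0.0, 1.0, 101)]))

# Toy example: three detections for one weed class, two ground-truth boxes.
print(iou([0, 0, 10, 10], [5, 5, 15, 15]))                    # ~0.14
print(average_precision([0.9, 0.8, 0.3], [1, 0, 1], n_gt=2))  # class AP
```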
This research offers a powerful solution for modern precision agriculture, enabling weeding robots and vision-based monitoring systems to adapt quickly to new environments without extensive retraining. The FSEA model’s integration of plant-specific priors reduces data requirements and enhances accuracy under real-world field conditions. Beyond weed detection, its methodology can extend to rare plant identification, pest monitoring, and early crop disease diagnosis. The open-source release of its dataset and code provides a foundation for further research in agricultural artificial intelligence. By promoting efficient, selective weed control, FSEA supports sustainable farming practices and minimizes reliance on chemical herbicides—advancing both agricultural productivity and environmental conservation.
###
References
DOI
Original URL
https://doi.org/10.1016/j.plaphe.2025.100086
Funding information
This project was funded by the National Natural Science Foundation of China, China (Award No.: U23A20330) and the Specific Research Project of Guangxi for Research Bases and Talents, China (Award No.: AD22035919). This research is also a product of the Modern Industry School of Subtropical Intelligent Agricultural Machinery and Equipment, Guangxi University, China (Project No. T3010097930).
About Plant Phenomics
Plant Phenomics is dedicated to publishing novel research that will advance all aspects of plant phenotyping from the cell to the plant population levels using innovative combinations of sensor systems and data analytics. Plant Phenomics also aims to connect phenomics to other science domains, such as genomics, genetics, physiology, molecular biology, bioinformatics, statistics, mathematics, and computer sciences. Plant Phenomics should thus contribute to advancing plant sciences and agriculture/forestry/horticulture by addressing key scientific challenges in the area of plant phenomics.
Journal
Plant Phenomics
Method of Research
Experimental study
Subject of Research
Not applicable
Article Title
FSEA: Incorporating domain-specific prior knowledge for few-shot weed detection
Next-generation phenotyping robot brings AI-driven insight to crop growth and stress response
Nanjing Agricultural University The Academy of Science
Equipped with RGB, hyperspectral, and depth sensors, the robot can autonomously navigate crop fields, capturing and analyzing data with exceptional accuracy. PhenoRob-F achieved impressive results in detecting wheat ears, segmenting rice panicles, reconstructing 3D plant structures, and classifying drought severity in rice with over 99% accuracy.
To meet the global challenge of increasing food production under climate change, plant breeders require reliable phenotypic data linking genes to observable traits such as growth, yield, and stress tolerance. Traditional manual measurements are labor-intensive and prone to error, while controlled-environment phenotyping systems fail to capture field variability. Aerial systems such as drones offer speed but lack payload and resolution, and fixed gantry systems are expensive and immobile. Autonomous mobile robots bridge these gaps with their flexible mobility, high-resolution imaging, and minimal soil disturbance. However, existing robots have struggled to balance precision, stability, and scalability under field conditions. To address these challenges, researchers designed PhenoRob-F to deliver robust, high-throughput phenotyping across multiple crops and environments.
A study (DOI: 10.1016/j.plaphe.2025.100085) published in Plant Phenomics on 13 August 2025 by Peng Song’s team, Huazhong Agricultural University, provides a powerful tool for plant breeders and agricultural researchers, enabling high-throughput, precise, and automated data acquisition that accelerates genetic discovery and crop improvement under real-world field conditions.
To evaluate the performance of PhenoRob-F under real-world conditions, the research team conducted three field experiments using multiple sensing and modeling techniques. The first experiment focused on RGB image acquisition for wheat and rice during the heading stage, where top-view canopy images were captured and analyzed using the YOLOv8m and SegFormer_B0 deep learning models. These enabled accurate detection of wheat ears and segmentation of rice panicles for yield estimation. The robot achieved a precision of 0.783, a recall of 0.822, and a mean average precision (mAP) of 0.853 for wheat, while rice panicle segmentation reached a mean intersection over union (mIoU) of 0.949 and an accuracy of 0.987, demonstrating robust visual performance. The second experiment employed an RGB-D depth camera to reconstruct the 3D structures of maize and rapeseed plants across growth stages. Using the scale-invariant feature transform (SIFT) and iterative closest point (ICP) algorithms, the robot generated high-fidelity point clouds for estimating plant height, achieving strong correlations with manual measurements (R² = 0.99 for maize and 0.97 for rapeseed). The third experiment applied hyperspectral imaging to rice under drought stress, collecting spectral data in the 900–1700 nm range to classify drought severity. After feature extraction and reduction via the CARS algorithm, a random forest model achieved classification accuracies ranging from 97.7% to 99.6% across five drought levels. Operationally, PhenoRob-F demonstrated high efficiency, completing phenotyping rounds in 2–2.5 hours and processing up to 1875 potted plants per hour. These experiments collectively confirmed the robot’s capability to autonomously collect multimodal data, integrate spectral and 3D imaging, and deliver high-precision phenotypic trait analysis across diverse crop species.
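The R² agreement reported for plant height can be illustrated with a short sketch. The height estimator below (a robust z-quantile spread over a registered point cloud) and the toy numbers are assumptions for illustration, not PhenoRob-F's actual SIFT/ICP pipeline.

```python
# Illustrative sketch (not PhenoRob-F's pipeline): estimating plant height
# from a registered point cloud and scoring agreement with manual
# measurements via the coefficient of determination (R^2).
import numpy as np

def plant_height(points, ground_pct=1.0, top_pct=99.0):
    """Height = spread between robust ground and canopy-top z-quantiles;
    the percentile cutoffs are assumed here to suppress outlier points."""
    z = np.asarray(points)[:, 2]
    return float(np.percentile(z, top_pct) - np.percentile(z, ground_pct))

def r_squared(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

manual = [1.82, 1.95, 2.10]   # hypothetical ruler measurements (m)
robot  = [1.80, 1.97, 2.08]   # hypothetical point-cloud estimates (m)
print(f"R^2 = {r_squared(manual, robot):.3f}")   # near 1.0 = close agreement
```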
PhenoRob-F offers a practical, cost-effective solution for field-based phenotyping, providing researchers and breeders with an automated means to evaluate crop performance across diverse conditions. The system can assist in yield prediction, stress monitoring, and genetic screening, ultimately supporting the development of climate-resilient and high-yield crop varieties. Beyond breeding, its hyperspectral and 3D imaging capabilities could be extended to monitor soil health, nutrient management, and pest detection. By significantly reducing the labor and time required for data collection, PhenoRob-F accelerates the transition from genomic data to field application—bridging a critical gap in modern agriculture’s digital transformation.
###
References
DOI
Original URL
https://doi.org/10.1016/j.plaphe.2025.100085
Funding information
This work was supported by the National Key Research and Development Program of China (2021YFD1200504, 2022YFD2002304), the National Natural Science Foundation of China (32471992), the Key Core Technology Project in Agriculture of Hubei Province (HBNYHXGG2023-9), and the Supporting Project for High-Quality Development of the Seed Industry of Hubei Province (HBZY2023B001-06).
About Plant Phenomics
Plant Phenomics is dedicated to publishing novel research that will advance all aspects of plant phenotyping from the cell to the plant population levels using innovative combinations of sensor systems and data analytics. Plant Phenomics also aims to connect phenomics to other science domains, such as genomics, genetics, physiology, molecular biology, bioinformatics, statistics, mathematics, and computer sciences. Plant Phenomics should thus contribute to advancing plant sciences and agriculture/forestry/horticulture by addressing key scientific challenges in the area of plant phenomics.
Journal
Plant Phenomics
Method of Research
Experimental study
Subject of Research
Not applicable
Article Title
PhenoRob-F: An autonomous ground-based robot for high-throughput phenotyping of field crops
AI-Generated 3D leaf models advance precision plant phenotyping
Nanjing Agricultural University The Academy of Science
By creating synthetic “leaf point clouds,” the method dramatically reduces the need for manual measurements and enables more accurate, scalable trait estimation in plant phenotyping.
In recent years, 3D plant phenotyping has emerged as a promising field for understanding crop structure and productivity. However, accurately estimating leaf traits remains challenging because obtaining ground-truth data requires time-consuming manual work by experts. Traditional image-based methods capture only 2D features and struggle to represent leaf curvature and geometry, while 3D approaches are constrained by limited labeled data for training. As a result, most algorithms either rely on rule-based models or generate synthetic data that lack real-world realism. These challenges highlight the need for a scalable and automated solution to produce high-quality, labeled 3D data for plant trait estimation.
A study (DOI: 10.1016/j.plaphe.2025.100071) published in Plant Phenomics on 16 June 2025 by Gianmarco Roggiolani’s team, University of Bonn, introduces a generative model capable of producing lifelike 3D leaf point clouds with known geometric traits, accelerating crop improvement and optimizing yield predictions through data-driven modeling.
The research team trained a 3D convolutional neural network to learn how to generate realistic leaf structures from skeletonized representations of real leaves. Using datasets from sugar beet, maize, and tomato plants, they extracted the “skeleton” of each leaf—the petiole and main and lateral axes that define its shape—and then expanded these skeletons into dense point clouds using a Gaussian mixture model. The neural network, designed as a 3D U-Net architecture, predicts per-point offsets to reconstruct the complete leaf shape while maintaining its structural traits. A combination of reconstruction and distribution-based loss functions ensures that the generated leaves match the geometric and statistical properties of real-world data. To validate the method, the researchers compared their synthetic dataset against existing generative approaches and real agricultural data using metrics such as the Fréchet Inception Distance (FID), CLIP Maximum Mean Discrepancy (CMMD), and precision–recall F-scores. The generated leaves showed high similarity to real ones, outperforming alternative datasets produced by agricultural simulation software or diffusion models. Importantly, when the synthetic data were used to fine-tune existing leaf trait estimation algorithms, such as polynomial fitting and principal component analysis-based models, the accuracy and precision of trait prediction improved substantially. Tests conducted on the BonnBeetClouds3D and Pheno4D datasets confirmed that models trained with the new synthetic data estimated real leaf length and width more accurately and with lower error variance. The researchers also demonstrated that their approach could generate diverse leaf shapes conditioned on user-defined traits, allowing for robust benchmarking and model development without costly manual labeling.
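The skeleton-to-cloud expansion step lends itself to a compact illustration: each skeleton point seeds a component of a Gaussian mixture, and a dense cloud is sampled from the mixture before the learned network refines per-point offsets. The sketch below is a minimal, assumption-laden version (isotropic components, equal weights, a toy straight-midrib skeleton), not the authors' implementation.

```python
# Hedged sketch of skeleton-to-cloud expansion: sample a dense leaf point
# cloud from a Gaussian mixture centred on skeleton points. The isotropic
# sigma, equal component weights, and toy skeleton are assumptions; the
# paper's 3D U-Net then predicts per-point offsets to shape the leaf.
import numpy as np

def expand_skeleton(skeleton_xyz, n_points=2048, sigma=0.005, rng=None):
    """Draw n_points from an equal-weight GMM, one component per skeleton point."""
    if rng is None:
        rng = np.random.default_rng(0)
    comp = rng.integers(0, len(skeleton_xyz), size=n_points)  # component per sample
    return skeleton_xyz[comp] + rng.normal(scale=sigma, size=(n_points, 3))

# Toy skeleton: a straight 10 cm midrib; real skeletons also carry the
# petiole and lateral axes extracted from scanned leaves.
skeleton = np.stack([np.linspace(0.0, 0.10, 20),
                     np.zeros(20), np.zeros(20)], axis=1)
cloud = expand_skeleton(skeleton)
print(cloud.shape)   # (2048, 3)
```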
This study represents a significant step toward automating 3D plant phenotyping and reducing the bottleneck caused by limited labeled data. By enabling realistic data generation based on real plant structures, the method provides a foundation for building, testing, and improving trait estimation algorithms in agriculture. Future work will expand this approach to handle more complex morphologies, such as compound leaves, and integrate it with plant growth models to simulate phenotypic changes across development stages. The team also envisions the creation of open-access libraries of synthetic yet biologically accurate plant datasets to support research in sustainable agriculture, robotic phenotyping, and crop improvement under climate challenges.
###
References
DOI
Original URL
https://doi.org/10.1016/j.plaphe.2025.100071
Funding information
This work has partially been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy, EXC-2070 – 390732324 – PhenoRob.
About Plant Phenomics
Plant Phenomics is dedicated to publishing novel research that will advance all aspects of plant phenotyping from the cell to the plant population levels using innovative combinations of sensor systems and data analytics. Plant Phenomics also aims to connect phenomics to other science domains, such as genomics, genetics, physiology, molecular biology, bioinformatics, statistics, mathematics, and computer sciences. Plant Phenomics should thus contribute to advancing plant sciences and agriculture/forestry/horticulture by addressing key scientific challenges in the area of plant phenomics.
Journal
Plant Phenomics
Method of Research
Experimental study
Subject of Research
Not applicable
Article Title
Generation of labeled leaf point clouds for plant trait estimation
Smart vision predicts wheat flowering days in advance using AI and weather data
Nanjing Agricultural University The Academy of Science
By integrating RGB images with meteorological data and applying few-shot learning techniques, the system achieves an F1 score above 0.8 across different planting environments.
Wheat (Triticum aestivum) is a cornerstone of global food security, and predicting its phenological stages, especially anthesis, is critical for optimizing breeding strategies and improving yields. Conventional anthesis prediction models rely on genetic markers or environmental variables such as temperature and photoperiod, successfully estimating flowering dates at the field scale. However, these models fail to capture the micro-environmental variations influencing individual plants. For breeders, timely prediction—typically 8–10 days in advance—is essential for hybrid pollination. Moreover, regulatory agencies in the United States and Australia mandate accurate anthesis reporting 7–14 days before flowering in biotechnology trials. Current manual monitoring is costly, inefficient, and prone to human error. Given these challenges, developing an automated, adaptable, and accurate method for predicting individual plant flowering became imperative.
A study (DOI: 10.1016/j.plaphe.2025.100091) published in Plant Phenomics on 21 July 2025 by Yiting Xie’s and Huajian Liu’s team, University of Adelaide, offers a cost-effective, scalable, and precise tool for wheat breeders and regulatory bodies, transforming the traditionally labor-intensive task of tracking flowering into a smart and automated process.
This study developed a multimodal machine vision framework that integrates RGB imagery and on-site meteorological data to predict the anthesis of individual wheat plants. The model reformulates flowering prediction into binary or three-class classification problems, determining whether a plant will flower before, after, or within one day of a critical date. To improve adaptability and minimize data demands, few-shot learning based on metric similarity was introduced, enabling models trained on one dataset to generalize effectively to new environments. The research employed two advanced architectures, Swin V2 and ConvNeXt, each paired with fully connected (FC) or transformer (TF) comparators. A multi-step evaluation process—including statistical profiling, cross-dataset validation, few-shot inference, ablation on weather integration, and anchor-transfer tests—demonstrated both model robustness and environmental sensitivity. Statistical analysis revealed clear climatic impacts on flowering duration, ranging from 18.4 days in early sowing to 11.6 days in late sowing, with ANOVA (P ≤ 0.001) confirming significant differences across conditions. Cross-dataset validation achieved F1 scores above 0.85 on training datasets and around 0.80 across independent datasets, indicating strong generalization. Few-shot inference improved accuracy further: one-shot models achieved F1 = 0.984 at 8 days before anthesis, while five-shot training raised weaker results (e.g., 0.75 → 0.889). Integrating weather data boosted accuracy by 0.06–0.13 F1 units, particularly 12–16 days before anthesis when image cues were weak. Anchor-transfer experiments verified model deployability, as anchors derived from the late-sowing dataset yielded comparable performance (F1 ≈ 0.76) at new field sites, demonstrating that environmental alignment was more critical than dataset size. Even under the more complex three-class prediction, models retained F1 > 0.6, confirming the framework’s robustness and practical potential for high-precision flowering prediction in wheat breeding.
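The metric-similarity few-shot step can be pictured as prototype matching: embed the support (anchor) images, average the embeddings per class, and assign a query to the nearest prototype. The sketch below assumes precomputed embeddings and a cosine metric; the backbone (Swin V2 or ConvNeXt with FC/TF comparators in the study) and the toy vectors are stand-ins, not the paper's code.

```python
# Prototype-matching sketch of metric-similarity few-shot classification,
# assuming features were already embedded by a backbone (Swin V2 or
# ConvNeXt in the study). Cosine metric and toy vectors are illustrative.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def classify_by_prototype(query, support, labels):
    """Assign query to the class whose mean support embedding is most similar."""
    classes = sorted(set(labels))
    protos = {c: np.mean([s for s, l in zip(support, labels) if l == c], axis=0)
              for c in classes}
    return max(classes, key=lambda c: cosine(query, protos[c]))

# One-shot toy example with hypothetical 3-D embeddings.
support = [np.array([1.0, 0.1, 0.0]),    # anchor: will flower within the window
           np.array([0.0, 0.9, 0.2])]    # anchor: will not
labels = ["flowering", "not-flowering"]
print(classify_by_prototype(np.array([0.9, 0.2, 0.1]), support, labels))
```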
This multimodal AI system provides breeders with a reliable decision-support tool to plan hybridization and manage pollination windows more efficiently. For genetically modified (GM) crop trials, it can ensure compliance with regulatory frameworks by forecasting flowering in advance, thereby reducing costs and manual inspection frequency. By merging visual phenotyping with weather analysis, this method bridges the gap between static imaging and dynamic environmental modeling, marking a significant step toward intelligent, automated phenology prediction in precision agriculture.
###
References
DOI
Original URL
https://doi.org/10.1016/j.plaphe.2025.100091
Funding information
This work was supported by the ARC Training Centre for Accelerated Future Crops Development (IC210100047), the South Australian Research and Development Institute, and the University of Adelaide Research Scholarships.
About Plant Phenomics
Plant Phenomics is dedicated to publishing novel research that will advance all aspects of plant phenotyping from the cell to the plant population levels using innovative combinations of sensor systems and data analytics. Plant Phenomics also aims to connect phenomics to other science domains, such as genomics, genetics, physiology, molecular biology, bioinformatics, statistics, mathematics, and computer sciences. Plant Phenomics should thus contribute to advancing plant sciences and agriculture/forestry/horticulture by addressing key scientific challenges in the area of plant phenomics.
Journal
Plant Phenomics
Method of Research
Experimental study
Subject of Research
Not applicable
Article Title
Multi-modal few-shot learning for anthesis prediction of individual wheat plants
Deep learning reveals the 3D secrets of fruit tissue microstructure
Nanjing Agricultural University The Academy of Science
Applied to apple and pear fruit, the model achieved very high accuracy, outperforming previous 2D approaches and traditional algorithms.
The 3D structure of plant tissues underlies vital metabolic processes, yet traditional microscopy methods demand extensive sample preparation and offer only small fields of view. X-ray micro-CT has recently enabled non-destructive 3D imaging of plant samples, but quantifying tissue morphology remains complex due to overlapping features and low image contrast. Existing segmentation techniques often fail to separate parenchyma cells, vascular tissues, or stone cell clusters. Recent advances in deep learning have transformed image analysis in medicine and biology, suggesting new opportunities for plant research. Due to these challenges, a deep learning–based approach is needed to achieve accurate, automated 3D segmentation of plant tissues from native X-ray micro-CT images.
A study (DOI: 10.1016/j.plaphe.2025.100087) published in Plant Phenomics on 5 July 2025 by Pieter Verboven’s team, KU Leuven, provides the first fully automated framework for labeling and quantifying plant tissue architecture, paving the way for faster and more precise studies of plant physiology and storage behavior.
The research employed a 3D panoptic segmentation framework built upon the 3D extension of Cellpose and a 3D Residual U-Net to achieve complete labeling of fruit tissue microstructure from X-ray micro-CT images. The model simultaneously performed instance segmentation—predicting intermediate gradient fields in X, Y, and Z to separate individual parenchyma cells—and semantic segmentation to classify voxels into cell matrix, pore space, vasculature, or stone cell clusters. It was trained on apple and pear datasets with synthetic data augmentation involving morphological dilation and erosion, grey-value assignment, and Gaussian noise addition, and benchmarked against a 2D instance segmentation model and a marker-based watershed algorithm. Evaluation using the Aggregated Jaccard Index (AJI) and Dice Similarity Coefficient (DSC) showed that the 3D model outperformed all previous approaches, reaching AJIs of 0.889 for apple and 0.773 for pear, compared with 0.861/0.732 for the 2D model and 0.715/0.631 for the benchmark. The model segmented pore spaces and cell matrices almost perfectly and successfully identified vasculature (DSC 0.506 in apple; 0.789 in pear) and stone cell clusters (IoU 0.683; DSC 0.810; precision 0.798; recall 0.836). Visual validation confirmed accurate detection of vascular bundles in ‘Kizuri’ and ‘Braeburn’ apples and smooth, realistic segmentation of stone cell clusters in ‘Celina’ and ‘Fred’ pears (DSC up to 0.90). However, additional data augmentation and targeted subsets did not enhance performance, likely due to dataset imbalance and domain shifts. Morphometric analysis further validated model accuracy, with vasculature widths ranging from 70 to 780 μm and stone cell clusters showing variable dimensions and sphericity (0.68–0.74). Overall, the 3D deep learning model provided the most complete, automated, and contrast-free approach for quantifying plant tissue microstructure to date.
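The overlap scores quoted above (DSC, IoU) are straightforward voxel-wise computations; AJI additionally matches predicted to ground-truth instances, which is omitted here for brevity. A minimal NumPy sketch on toy boolean volumes:

```python
# Voxel-wise overlap metrics behind the scores above; AJI's instance
# matching is omitted. Toy volumes stand in for segmented tissue masks.
import numpy as np

def dice(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + 1e-9)

def voxel_iou(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return inter / (np.logical_or(pred, truth).sum() + 1e-9)

rng = np.random.default_rng(1)
truth = rng.random((32, 32, 32)) > 0.5      # toy ground-truth voxel mask
pred = truth.copy()
pred[:2] = ~pred[:2]                        # flip one slab to mimic errors
print(f"DSC = {dice(pred, truth):.3f}, IoU = {voxel_iou(pred, truth):.3f}")
```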
This 3D deep learning–based model provides plant scientists with a powerful, non-destructive tool for studying how microscopic structures influence water, gas, and nutrient transport. It can drastically accelerate “human-in-the-loop” analysis, reducing manual labor while improving accuracy in tissue characterization. In fruit research, the model helps reveal how cellular arrangements determine texture, storability, and susceptibility to physiological disorders such as browning or watercore. More broadly, the technology offers a scalable framework for studying tissue development, ripening, and stress responses across diverse crops. Its compatibility with standard X-ray micro-CT instruments makes it an accessible solution for integrating artificial intelligence into plant anatomy and food science research.
###
References
DOI
Original URL
https://doi.org/10.1016/j.plaphe.2025.100087
Funding information
This research was funded by the Research Foundation – Flanders (FWO, grant number S003421N, SBO project FoodPhase) and KU Leuven (project C1 C14/22/076).
About Plant Phenomics
Plant Phenomics is dedicated to publishing novel research that will advance all aspects of plant phenotyping from the cell to the plant population levels using innovative combinations of sensor systems and data analytics. Plant Phenomics also aims to connect phenomics to other science domains, such as genomics, genetics, physiology, molecular biology, bioinformatics, statistics, mathematics, and computer sciences. Plant Phenomics should thus contribute to advancing plant sciences and agriculture/forestry/horticulture by addressing key scientific challenges in the area of plant phenomics.
Journal
Plant Phenomics
Method of Research
Experimental study
Subject of Research
Not applicable
Article Title
Panoptic segmentation for complete labeling of fruit microstructure in 3D micro-CT images with deep learning
CitrusGAN: AI revolutionizes 3D fruit phenotyping with sparse X-ray imaging
Nanjing Agricultural University The Academy of Science
Unlike traditional computed tomography (CT) scanning that requires expensive machines and time-consuming data acquisition, CitrusGAN produces high-quality, high-resolution models in seconds using only six X-ray views. The generated 3D models capture both external and internal fruit structures, enabling accurate measurement of traits such as peel thickness, edible rate, and number of segments.
Citrus is one of the world’s most widely cultivated crops, producing over 150 million tons annually. To improve fruit quality, flavor, and stress resilience, breeders rely on phenotypic analysis—the study of visible and structural traits linked to genetic and environmental factors. However, manual phenotyping is slow and error-prone, and most existing image-based or LiDAR methods can only capture surface features, not internal traits. Conventional CT imaging can reveal interior structures but remains costly and inefficient. To address these challenges, researchers sought to create a rapid, affordable, and non-destructive 3D reconstruction technique to visualize and analyze both the internal and external morphology of fruit with unprecedented efficiency.
A study (DOI: 10.1016/j.plaphe.2025.100082) published in Plant Phenomics on 26 June 2025 by Yaohui Chen’s team, Huazhong Agricultural University, marks a significant leap toward non-destructive, high-throughput phenotyping, potentially transforming fruit breeding, quality control, and agricultural automation.
In this study, the researchers employed a generative deep learning method to reconstruct three-dimensional (3D) citrus computed tomography (CT) models from sparse-view X-ray images, testing reconstruction quality across different input configurations. They compared models trained on 2, 4, and 6 input views to evaluate the effect of view number on output fidelity. The results revealed that all reconstructed models achieved structural similarity index (SSIM) values above 0.9, indicating high resemblance to real CT data, while the 6-view model reached the highest performance with a 1.2 dB increase in peak signal-to-noise ratio (PSNR) and an SSIM of 0.92. Qualitative analyses showed that the 6-view model effectively reproduced external contours and internal pulp structures with greater clarity, preserving detailed segment boundaries and shape completeness that closely matched the real CT slices. Minor discrepancies were mainly observed in low-density tissues, such as granulated pulp or inner peel layers, which appeared darker and less distinct due to their reduced water content. For phenotypic validation, 77 fruit samples were analyzed to extract eight structural traits—volume, surface area, height, width, length, peel thickness, edible rate, and segment number—from both real and generated CT models. A strong correlation (R² > 0.95) was observed for most parameters, confirming the method’s high quantitative reliability, although peel thickness and edible rate showed greater variation due to subtle boundary blurring. Overall, the findings demonstrate that the deep learning model can efficiently and accurately reconstruct both external and internal fruit morphology from minimal X-ray data, achieving high-throughput precision phenotyping suitable for practical breeding and quality evaluation.
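The SSIM and PSNR figures above are standard image-fidelity metrics and can be reproduced on any pair of images with scikit-image. In the sketch below, the toy arrays stand in for a real and a generated CT slice; the study evaluates full volumes, and its exact settings are not reproduced here.

```python
# Hedged sketch of the fidelity metrics on toy data with scikit-image; the
# arrays stand in for a real CT slice and its generated counterpart.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
real_slice = rng.random((128, 128)).astype(np.float32)
generated = np.clip(real_slice + rng.normal(scale=0.05, size=(128, 128)),
                    0.0, 1.0).astype(np.float32)

psnr = peak_signal_noise_ratio(real_slice, generated, data_range=1.0)
ssim = structural_similarity(real_slice, generated, data_range=1.0)
print(f"PSNR = {psnr:.1f} dB, SSIM = {ssim:.3f}")  # SSIM near 1 = close match
```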
By drastically reducing imaging costs and computation time, CitrusGAN represents a transformative step in agricultural automation. It enables breeders to evaluate hundreds of fruit samples non-destructively, accelerating the selection of desirable genotypes and improving yield and quality traits. Beyond breeding, the technology could be deployed in automated sorting and grading systems, ensuring consistency in commercial fruit production. The method’s ability to visualize internal features also opens new possibilities for detecting hidden defects and monitoring fruit ripeness, providing real-time insights for both research and industry.
###
References
DOI
Original URL
https://doi.org/10.1016/j.plaphe.2025.100082
Funding information
This work is supported by the National Natural Science Foundation of China (32302206) and the Fundamental Research Funds for the Central Universities, China (2662024SZ002).
About Plant Phenomics
Plant Phenomics is dedicated to publishing novel research that will advance all aspects of plant phenotyping from the cell to the plant population levels using innovative combinations of sensor systems and data analytics. Plant Phenomics also aims to connect phenomics to other science domains, such as genomics, genetics, physiology, molecular biology, bioinformatics, statistics, mathematics, and computer sciences. Plant Phenomics should thus contribute to advancing plant sciences and agriculture/forestry/horticulture by addressing key scientific challenges in the area of plant phenomics.
Journal
Plant Phenomics
Method of Research
Experimental study
Subject of Research
Not applicable
Article Title
CitrusGAN: sparse-view X-ray CT reconstruction for citrus based on generative adversarial networks
Spectral signatures reveal hidden pine defenses: New tech enhances fusiform rust resistance screening
The researchers demonstrated that NIR spectroscopy, when paired with advanced chemometric modeling, can classify resistant and susceptible trees with up to 69% accuracy—even before symptoms appear.
Loblolly pine is the most widely planted timber species in the U.S., producing nearly 60% of the nation’s wood supply. However, its productivity is continually threatened by Cronartium quercuum f. sp. fusiforme, the fungus responsible for fusiform rust. This pathogen alternates between oaks and pines in its complex life cycle, infecting young trees and forming galls that deform stems, reduce wood quality, and often lead to mortality. While deploying genetically resistant families has curbed the disease, visual phenotyping remains subjective. Subtle symptoms may be missed, and environmental variability often obscures true resistance. These limitations underscore the need for objective, field-deployable tools to evaluate disease resistance in tree breeding programs. Motivated by these challenges, researchers explored whether vibrational spectroscopy could non-destructively identify resistance traits in asymptomatic trees.
A study (DOI: 10.1016/j.plaphe.2025.100066) published in Plant Phenomics on 6 June 2025 by Simone Lim-Hing’s team, University of Georgia, provides tree breeders with a powerful, non-destructive tool for improving disease resistance screening and advancing precision forestry.
In this study, researchers applied two vibrational spectroscopy-based methods—near-infrared (NIR) and Fourier-transform mid-infrared (FT-IR) spectroscopy—to evaluate loblolly pine (Pinus taeda L.) resistance to fusiform rust. Phloem and needle samples were collected from 34 pine families across eight test sites in Alabama, Florida, and Georgia, and analyzed using a handheld NIR spectrometer and a benchtop FT-IR device. Chemometric and machine learning approaches, including support vector machines (SVM) and sparse partial least squares discriminant analysis (sPLS-DA), were used to classify trees as resistant or susceptible based on their spectral profiles. The NIR analysis involved 275 samples, while the FT-IR analysis included 234 phloem samples after processing losses. Non-metric multidimensional scaling (NMDS) and PERMANOVA tests revealed that site effects strongly influenced spectral variation, but resistance classes were not clearly separated. Despite this, models built from NIR spectra achieved higher predictive accuracy than those based on FT-IR data. The best-performing NIR model, using data from the 30 most resistant and 30 most susceptible trees, achieved 81.5% training accuracy and 68.7% testing accuracy, whereas FT-IR models reached up to 65% testing accuracy. Both modeling approaches showed reduced performance when intermediate phenotypes were included. Phloem tissue consistently provided better discrimination than needle tissue, highlighting its closer link to disease defense mechanisms. Several recurring spectral bands—5678, 5800, 5814, 5827, 5841, and 6222 cm⁻¹—were identified as key indicators of resistance-associated chemistry. Overall, the study demonstrates that NIR spectroscopy offers a reliable, non-destructive, and field-deployable tool for early detection of disease resistance, providing tree breeders with an efficient method to enhance selection accuracy and reduce the costs and errors associated with traditional visual phenotyping.
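The core classification step, an SVM separating resistant from susceptible spectra, can be sketched with scikit-learn. Everything below is an assumption for illustration: the synthetic spectra, the injected resistance-linked band, and the scaling and kernel choices; the study's actual chemometric preprocessing is more involved.

```python
# Hedged sketch of SVM classification of NIR spectra: synthetic absorbance
# vectors and a purely illustrative "resistance band"; not the study's
# preprocessing or model settings.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trees, n_bands = 60, 200             # 60 trees, 200 hypothetical wavenumber bands
X = rng.normal(size=(n_trees, n_bands))
y = rng.integers(0, 2, size=n_trees)   # 0 = susceptible, 1 = resistant (toy labels)
X[y == 1, 50:55] += 0.8                # inject a resistance-linked band (illustrative)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_tr, y_tr)
print(f"test accuracy: {model.score(X_te, y_te):.2f}")
```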
By integrating NIR spectroscopy into breeding programs, foresters can objectively assess disease resistance in real time, supplementing traditional visual methods. The lightweight, field-deployable device allows rapid sampling of hundreds of trees, minimizing labor-intensive evaluations and the risk of undetected infections. Beyond fusiform rust, this proof-of-concept demonstrates how spectroscopy, combined with machine learning, can transform forestry phenotyping. The approach aligns with the growing movement toward precision forestry—leveraging data-driven technologies to sustain healthy, resilient forests amid rising biotic stresses.
###
References
DOI
Original URL
https://doi.org/10.1016/j.plaphe.2025.100066
Funding information
This research was funded by the United States Forest Service, Forest Health Protection Special Technology Development Program (grant number 20-DG-11083150-003) and the Southern Pine Health Research Cooperative (SPHRC) at the University of Georgia (Athens, Georgia, United States). We would like to thank the Cooperative Tree Improvement Program at NC State University (Raleigh, North Carolina, United States) for providing data, which was made possible because of the establishment, management, and measurement of tests by members of the Cooperative. Funding for the Cooperative was also provided by the Department of Forestry and Environmental Resources in the College of Natural Resources at North Carolina State University and by USDA National Institute of Food and Agriculture McIntire-Stennis Project NCZ04149.
About Plant Phenomics
Plant Phenomics is dedicated to publishing novel research that will advance all aspects of plant phenotyping from the cell to the plant population levels using innovative combinations of sensor systems and data analytics. Plant Phenomics also aims to connect phenomics to other science domains, such as genomics, genetics, physiology, molecular biology, bioinformatics, statistics, mathematics, and computer sciences. Plant Phenomics should thus contribute to advancing plant sciences and agriculture/forestry/horticulture by addressing key scientific challenges in the area of plant phenomics.
Journal
Plant Phenomics
Method of Research
Experimental study
Subject of Research
Not applicable
Article Title
Near-infrared spectroscopy as a high-throughput phenotyping method for fusiform rust resistance in loblolly pine
Adaptive Bayesian sampling streamlines plant imaging and data efficiency
Nanjing Agricultural University The Academy of Science
By dynamically adjusting sampling frequency based on uncertainty and prior information, this method cuts data acquisition by up to 80% without compromising essential biological insights. Among five tested Bayesian techniques, the Markov Chain Monte Carlo (MCMC) and Gaussian Process (GP) approaches demonstrated the best balance between compression, precision, and computational cost, marking a major step toward efficient, real-time phenotyping.
Advances in plant imaging and computer vision have transformed agriculture and biology by enabling continuous and objective trait quantification. However, monitoring large plant populations or long-term processes—such as germination and growth—creates vast data streams that are costly to produce, process, and store. Fixed-rate sampling, though simple, often results in redundant data because plant growth is nonlinear and varies across species and environments. Adaptive temporal sampling offers a solution by dynamically adjusting measurement timing according to the biological process being observed. To address these challenges, the research team developed a Bayesian adaptive sampling framework to optimize data collection for non-linear plant growth processes.
A study (DOI: 10.1016/j.plaphe.2025.100067) published in Plant Phenomics on 21 June 2025 by David Rousseau’s team, Université d’Angers, presents a practical solution for high-throughput plant imaging and monitoring tasks, enabling researchers to reduce costs associated with data production, storage, and analysis.
The study employed five Bayesian adaptive sampling techniques—Importance Sampling (IS), Markov Chain Monte Carlo (MCMC), Gaussian Process (GP), Extended Kalman Filter (EKF), and Sequential Importance Resampling Particle Filter (SIR-PF)—to evaluate their efficiency in monitoring seed germination kinetics through time-lapse imaging. Each method dynamically adjusted sampling frequency based on prior data and model uncertainty, allowing the researchers to assess performance across three key dimensions: compression-distortion trade-off, robustness to variations in germination speed, and computational cost. The compression-distortion analysis demonstrated that adaptive sampling could drastically reduce data volume by 80%, achieving a compression ratio of 0.2 while preserving accuracy. Among the tested methods, IS, MCMC, and GP exhibited the lowest distortion for both simulated and real datasets. In robustness tests across fast, normal, and slow germination scenarios, MCMC consistently produced the lowest mean square error (MSE) and global bias, showing strong adaptability to variable biological conditions. GP also performed reliably, offering unbiased parameter estimation even when germination speeds differed from prior expectations. In terms of computation time, EKF was the fastest at 0.02 milliseconds per estimation, while MCMC, though the slowest at 2.6 seconds, maintained computational feasibility for real-time biological monitoring. Analysis of the adaptive threshold σT revealed that while GP and EKF allowed precise control over the number of samples, distortion could not be perfectly constrained, indicating a need for further refinement. Overall, the Bayesian adaptive sampling framework provides an operational, cost-efficient solution for continuous plant monitoring and can be extended to other dynamic biological processes, such as circadian leaf cycles or pathogen spread, with future improvements potentially achieved through advanced non-linear filtering techniques like the Unscented Kalman Filter.
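The GP variant of the framework can be illustrated in a few lines: fit a Gaussian process to the observations collected so far, then schedule the next image wherever the posterior standard deviation peaks. The kernel, noise level, and logistic toy germination curve below are assumptions, not the paper's configuration.

```python
# Sketch of GP-driven adaptive sampling: fit a Gaussian process to the
# germination fractions observed so far, then image next where the posterior
# standard deviation peaks. Kernel and toy curve are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def germination_fraction(t):
    return 1.0 / (1.0 + np.exp(-(t - 48.0) / 6.0))  # toy sigmoid kinetics (hours)

grid = np.linspace(0.0, 96.0, 200)[:, None]         # candidate imaging times
times, obs = [0.0, 96.0], [germination_fraction(0.0), germination_fraction(96.0)]

for _ in range(6):                                  # 6 adaptive shots vs. 200 fixed-rate
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=12.0),
                                  alpha=1e-4, optimizer=None)
    gp.fit(np.asarray(times)[:, None], obs)
    _, std = gp.predict(grid, return_std=True)
    t_next = float(grid[np.argmax(std), 0])         # sample where uncertainty peaks
    times.append(t_next)
    obs.append(germination_fraction(t_next))

print(sorted(round(t, 1) for t in times))           # the adaptive imaging schedule
```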
Beyond seed germination, the methodology can be extended to a wide range of time-dependent biological phenomena such as circadian leaf movement, seedling emergence, and disease progression. Its compatibility with non-linear models and low computational requirements make it well-suited for integration with Internet of Things (IoT) systems and automated phenotyping platforms. By optimizing when and how data are collected, this approach supports sustainable, efficient, and scalable agricultural monitoring strategies for the digital farming era.
###
References
DOI
Original URL
https://doi.org/10.1016/j.plaphe.2025.100067
Funding information
This research was funded by La Région des Pays de la Loire under the TANDEM program.
About Plant Phenomics
Plant Phenomics is dedicated to publishing novel research that will advance all aspects of plant phenotyping from the cell to the plant population levels using innovative combinations of sensor systems and data analytics. Plant Phenomics also aims to connect phenomics to other science domains, such as genomics, genetics, physiology, molecular biology, bioinformatics, statistics, mathematics, and computer sciences. Plant Phenomics should thus contribute to advancing plant sciences and agriculture/forestry/horticulture by addressing key scientific challenges in the area of plant phenomics.
Journal
Plant Phenomics
Method of Research
Experimental study
Subject of Research
Not applicable
Article Title
Bayesian adaptive sampling: A smart approach for affordable germination phenotyping