Monday, June 24, 2024


UVA and the Toyota Research Institute aim to give your car the power to reason




UNIVERSITY OF VIRGINIA SCHOOL OF ENGINEERING AND APPLIED SCIENCE

UVA Link Lab Driving Simulator 

IMAGE: Yen-Ling Kuo, an assistant professor of computer science, is building a driving simulator, similar to this one in UVA Engineering’s Link Lab, to collect data on driving behavior. She’ll use the data to enable a robot’s AI to associate the meaning of words with what it sees, either by watching how humans interact with the environment or through its own interactions with it.


CREDIT: Graeme Jenvey/University of Virginia School of Engineering and Applied Science





Self-driving cars are coming, but will you really be OK sitting passively while a 2,000-pound autonomous robot motors you and your family around town?

Would you feel more secure if, while autonomous technology is perfected over the next few years, your semi-autonomous car could explain to you what it’s doing — for example, why it suddenly braked when you didn’t? 

Better yet, what if it could help your teenager not only learn to drive, but learn to drive more safely? 

Yen-Ling Kuo, the Anita Jones Faculty Fellow and assistant professor of computer science at the University of Virginia School of Engineering and Applied Science, is training machines to use human language and reasoning to be capable of doing all of that and more. The work is funded by a two-year Young Faculty Researcher grant from the Toyota Research Institute.

“This project is about how artificial intelligence can understand the meaning of drivers’ actions through language modeling and use this understanding to augment our human capabilities,” Kuo said.

“By themselves, robots aren’t perfect, and neither are we. We don’t necessarily want machines to take over for us, but we can work with them for better outcomes.”

Eliminating the Need to Program Every Scenario

To reach that level of cooperation, you need machine learning models that imbue robots with generalizable reasoning skills.

That’s “as opposed to collecting large datasets to train for every scenario, which will be expensive, if not impossible,” Kuo said.

Kuo is collaborating with a team at the Toyota Research Institute to build language representations of driving behavior that enable a robot to associate the meaning of words with what it sees by watching how humans interact with the environment or by its own interactions with the environment.

Let’s say you’re an inexperienced driver, or maybe you grew up in Miami and moved to Boston. A car that helps you drive on icy roads would be handy, right?

This new intelligence will be especially important for handling out-of-the-ordinary circumstances, such as helping inexperienced drivers adjust to road conditions or guiding them through challenging situations.

“We would like to apply the learned representations in shared autonomy. For example, the AI can describe a high-level intention of turning right without skidding and give guidance to slow to a certain speed while turning right,” Kuo said. “If the driver doesn’t slow enough, the AI will adjust the speed further, or if the driver’s turn is too sharp, the AI will correct for it.”
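The shared-autonomy behavior Kuo describes can be sketched as a simple blend of driver input and AI guidance. The function names, units, and limits below are illustrative assumptions for this example, not UVA or Toyota code:

```python
# Illustrative sketch of shared autonomy (all names and limits here are
# assumptions, not project code): the AI does not take over; it nudges the
# driver's inputs toward its recommended safe limits.

def shared_control(driver_speed_mph, driver_steering, safe_speed_mph, max_steering):
    """Blend driver input with AI guidance by clamping toward safe limits."""
    speed = min(driver_speed_mph, safe_speed_mph)  # slow further if the driver doesn't
    steering = max(-max_steering, min(driver_steering, max_steering))  # soften a too-sharp turn
    return speed, steering

# A right turn taken too fast and too sharply gets corrected:
print(shared_control(45.0, 0.9, 30.0, 0.5))   # → (30.0, 0.5)
# Inputs already within limits pass through unchanged:
print(shared_control(25.0, 0.2, 30.0, 0.5))   # → (25.0, 0.2)
```

In a real system the limits would come from the learned language representations and road conditions rather than fixed constants, but the clamping pattern captures the "adjust only when needed" behavior described in the quote.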

Kuo will develop the language representations from a variety of data sources, including from a driving simulator she is building for her lab this summer.

Her work is being noticed. Kuo recently gave an invited talk on related research at the Association for the Advancement of Artificial Intelligence’s New Faculty Highlights 2024 program. She also has a forthcoming paper, “Learning Representations for Robust Human-Robot Interaction,” slated for publication in AI Magazine.

Advancing Human-Centered AI

Kuo’s proposal closely aligns with the Toyota Research Institute’s goals for advancing human-centered AI, interactive driving and robotics. 

“Once language-based representations are learned, their semantics can be used to share autonomy between humans and vehicles or robots, promoting usability and teaming,” said Kuo’s co-investigator, Guy Rosman, who manages the institute’s Human Aware Interaction and Learning team.

“This harnesses the power of language-based reasoning into driver-vehicle interactions that better generalize our notion of common sense, well beyond existing approaches,” Rosman said.

That means if you ever do hand the proverbial keys over to your car, the trust enabled by Kuo’s research should help you steer clear of any worries.


Berkeley Lab researchers advance AI-driven plant root analysis


Enhancing biomass assessment and plant root growth monitoring in hydroponic systems




DOE/LAWRENCE BERKELEY NATIONAL LABORATORY

RhizoNet harnesses the power of AI to transform how we study plant roots 

IMAGE: Developed by Berkeley Lab researchers, RhizoNet is a new computational tool that harnesses the power of AI to transform how we study plant roots, offering new insights into root behavior under various environmental conditions. It works in conjunction with EcoFAB, a novel hydroponic device that facilitates in-situ plant imaging by offering a detailed view of plant root systems.


CREDIT: Thor Swift, Lawrence Berkeley National Laboratory




In a world striving for sustainability, understanding the hidden half of a living plant – the roots – is crucial. Roots are not just an anchor; they are a dynamic interface between the plant and soil, critical for water uptake, nutrient absorption, and, ultimately, the survival of the plant. In an effort to boost agricultural yields and develop crops resilient to climate change, scientists from Lawrence Berkeley National Laboratory’s (Berkeley Lab’s) Applied Mathematics and Computational Research (AMCR) and Environmental Genomics and Systems Biology (EGSB) Divisions have made a significant leap. Their latest innovation, RhizoNet, harnesses the power of artificial intelligence (AI) to transform how we study plant roots, offering new insights into root behavior under various environmental conditions.

This pioneering tool, detailed in a study published on June 5 in Scientific Reports, revolutionizes root image analysis by automating the process with exceptional accuracy. Traditional methods, which are labor-intensive and prone to errors, fall short when faced with the complex and tangled nature of root systems. RhizoNet steps in with a state-of-the-art deep learning approach, enabling researchers to track root growth and biomass with precision. Built on a convolutional neural network backbone, this new computational tool semantically segments plant roots for comprehensive biomass and growth assessment, changing the way laboratories can analyze plant roots and propelling efforts toward self-driving labs.

As Berkeley Lab’s Daniela Ushizima, lead investigator of the AI-driven software, explained, “The capability of RhizoNet to standardize root segmentation and phenotyping represents a substantial advancement in the systematic and accelerated analysis of thousands of images. This innovation is instrumental in our ongoing efforts to enhance the precision in capturing root growth dynamics under diverse plant conditions.” 

Getting to the Roots

Root analysis has traditionally relied on flatbed scanners and manual segmentation methods, which are not only time-consuming but also susceptible to errors, particularly in extensive multi-plant studies. Root image segmentation also presents significant challenges due to natural phenomena like bubbles, droplets, reflections, and shadows. The intricate nature of root structures and the presence of noisy backgrounds further complicate the automated analysis process. These complications are particularly acute at smaller spatial scales, where fine structures are sometimes only as wide as a pixel, making manual annotation extremely challenging even for expert human annotators.

EGSB recently introduced the latest version (2.0) of EcoFAB, a novel hydroponic device that facilitates in-situ plant imaging by offering a detailed view of plant root systems. EcoFAB – developed via a collaboration between EGSB, the DOE Joint Genome Institute (JGI), and the Climate & Ecosystem Sciences division at Berkeley Lab – is part of an automated experimental system designed to perform fabricated ecosystem experiments that enhance data reproducibility. RhizoNet, which processes color scans of plants grown in EcoFAB under specific nutritional treatments, addresses the scientific challenges of plant root analysis. It employs a Residual U-Net architecture – a semantic segmentation network that improves on the original U-Net by adding residual connections between the input and output blocks at each resolution level of the encoder and decoder pathways – to deliver root segmentation specifically adapted to EcoFAB conditions, significantly enhancing prediction accuracy. The system also integrates a convexification procedure that encapsulates the roots identified across a time series, helping to quickly delineate the primary root components from complex backgrounds. This integration is key for accurately monitoring root biomass and growth over time, especially in plants grown under varied nutritional treatments in EcoFABs.
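The residual connections mentioned above can be illustrated with a deliberately minimal sketch. This is a conceptual toy in plain Python, not RhizoNet's actual implementation: a stand-in "layer" transforms the input, and the block adds the original input back to the transformed output, so each level refines its features rather than replacing them.

```python
# Conceptual sketch of a residual connection (not RhizoNet's code): the
# block's output is its transformation of the input plus the input itself.

def conv_like(x, weight=0.5, bias=0.1):
    """Stand-in for a convolutional layer: scale and shift each value."""
    return [weight * v + bias for v in x]

def residual_block(x):
    """Apply two layers, then add the identity shortcut back in."""
    transformed = conv_like(conv_like(x))
    return [t + v for t, v in zip(transformed, x)]

# The identity shortcut preserves the input signal even when the
# intermediate layers attenuate it:
print(residual_block([1.0, 2.0]))
```

In a real Residual U-Net the layers are learned convolutions over image tensors, but the shortcut addition is the same idea, and it is what eases training of the deep encoder and decoder pathways.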

To illustrate this, the new Scientific Reports paper details how the researchers used EcoFAB and RhizoNet to process root scans of Brachypodium distachyon (a small grass species) plants subjected to different nutrient deprivation conditions over approximately five weeks. These images, taken every three to seven days, provide vital data that help scientists understand how roots adapt to varying environments. The high-throughput nature of EcoBOT, the new image acquisition system for EcoFABs, offers research teams the potential for systematic experimental monitoring – as long as data is analyzed promptly. 

“We’ve made a lot of progress in reducing the manual work involved in plant cultivation experiments with the EcoBOT, and now RhizoNet is reducing the manual work involved in analyzing the data generated,” noted Peter Andeer, a research scientist in EGSB and a lead developer of EcoBOT, who collaborated with Ushizima on this work. “This increases our throughput and moves us toward the goal of self-driving labs.” Resources at the National Energy Research Scientific Computing Center (NERSC) – a U.S. Department of Energy (DOE) user facility located at Berkeley Lab – were used to train RhizoNet and perform inference, bringing this computer vision capability to the EcoBOT, Ushizima noted.

“EcoBOT is capable of collecting images automatically, but it was unable to determine how the plant responds to different environmental changes – whether it is alive or not, or growing or not,” Ushizima explained. “By measuring the roots with RhizoNet, we capture detailed data on root biomass and growth, not solely to determine plant vitality but to provide comprehensive, quantitative insights that are not readily observable through conventional means. After training, the model can be reused across multiple experiments (unseen plants).”

“In order to analyze the complex plant images from the EcoBOT, we created a new convolutional neural network for semantic segmentation," added Zineb Sordo, a computer systems engineer in AMCR working as a data scientist on the project. "Our goal was to design an optimized pipeline that uses prior information about the time series to improve the model's accuracy beyond manual annotations done on a single frame. RhizoNet handles noisy images, detecting plant roots from images so biomass and growth can be calculated.”

One Patch at a Time

During model tuning, the findings indicated that using smaller image patches significantly enhances the model's performance. In these patches, each neuron in the early layers of the artificial neural network has a smaller receptive field. This allows the model to capture fine details more effectively, enriching the latent space with diverse feature vectors. This approach not only improves the model's ability to generalize to unseen EcoFAB images but also increases its robustness, enabling it to focus on thin objects and capture intricate patterns despite various visual artifacts.

Smaller patches also help prevent class imbalance by excluding sparsely labeled patches – those with less than 20% of annotated pixels, predominantly background. The team’s results show high accuracy, precision, recall, and Intersection over Union (IoU) for smaller patch sizes, demonstrating the model's improved ability to distinguish roots from other objects or artifacts.
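The patching and filtering steps described above can be sketched as follows. The helper names and the toy mask are illustrative assumptions; the 20% annotation threshold and the IoU metric come from the article.

```python
# Sketch of the patch-filtering rule and IoU metric described above
# (helper names and the toy mask are assumptions; the 20% threshold
# and the metric itself come from the text).

def split_into_patches(mask, size):
    """Yield non-overlapping size x size sub-grids of a 2D binary mask."""
    for r in range(0, len(mask) - size + 1, size):
        for c in range(0, len(mask[0]) - size + 1, size):
            yield [row[c:c + size] for row in mask[r:r + size]]

def keep_patch(patch, min_fraction=0.20):
    """Drop sparsely labeled patches to limit class imbalance."""
    total = sum(len(row) for row in patch)
    annotated = sum(sum(row) for row in patch)
    return annotated / total >= min_fraction

def iou(pred, truth):
    """Intersection over Union of two flat binary masks."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    union = sum(1 for p, t in zip(pred, truth) if p or t)
    return inter / union if union else 1.0

mask = [
    [1, 1, 0, 0],   # root pixels (1) concentrated in the top-left corner
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
patches = list(split_into_patches(mask, 2))
kept = [p for p in patches if keep_patch(p)]
print(len(patches), len(kept))          # background-only patches are dropped
print(iou([1, 1, 0, 1], [1, 0, 0, 1]))  # 2 shared pixels over 3 in the union
```

Training only on the kept patches biases each batch toward patches that actually contain roots, which is the class-imbalance remedy the paragraph describes.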

To validate the performance of root predictions, the paper compares predicted root biomass to actual measurements. Linear regression analysis revealed a significant correlation, underscoring the precision of automated segmentation over manual annotations, which often struggle to distinguish thin root pixels from similar-looking noise. This comparison highlights the challenge human annotators face and showcases the advanced capabilities of the RhizoNet models, particularly when trained on smaller patch sizes.
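The validation step described above amounts to fitting an ordinary least-squares line between predicted and measured biomass and checking the correlation. The sketch below shows that computation in plain Python; the sample numbers are made-up placeholders, not data from the study.

```python
# Sketch of the biomass validation step: least-squares fit and Pearson
# correlation between predicted and measured values. The data points are
# fabricated placeholders for illustration only.

def linear_fit(xs, ys):
    """Return (slope, intercept) of the least-squares line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

predicted = [0.8, 1.1, 1.9, 2.5, 3.2]   # hypothetical RhizoNet biomass estimates
measured = [0.9, 1.0, 2.0, 2.4, 3.3]    # hypothetical ground-truth measurements
slope, intercept = linear_fit(predicted, measured)
print(round(slope, 2), round(intercept, 2), round(pearson_r(predicted, measured), 3))
```

A slope near 1, an intercept near 0, and a correlation near 1 together indicate that the automated segmentation tracks the physical measurements closely, which is the "significant correlation" the paper reports.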

This study demonstrates the practical applications of RhizoNet in current research settings, the authors noted, and lays the groundwork for future innovations in sustainable energy solutions as well as carbon-sequestration technology using plants and microbes. The research team is optimistic about the implications of their findings. 

“Our next steps involve refining RhizoNet’s capabilities to further improve the detection and branching patterns of plant roots,” said Ushizima. "We also see potential in adapting and applying these deep-learning algorithms for roots in soil as well as new materials science investigations. We're exploring iterative training protocols, hyperparameter optimization, and leveraging multiple GPUs. These computational tools are designed to assist science teams in analyzing diverse experiments captured as images, and have applicability in multiple areas.” 

Further research work in plant root growth dynamics is described in a pioneering book on autonomous experimentation edited by Ushizima and Berkeley Lab colleague Marcus Noack that was released in 2023. Other team members from Berkeley Lab include Peter Andeer, Trent Northen, Camille Catoulos, and James Sethian. This multidisciplinary group of scientists is part of Twin Ecosystems, a DOE Office of Science Genomic Science Program project that integrates computer vision software and autonomous experimental design software developed at Berkeley Lab (gpCAM) with an automated experimental system (EcoFAB and EcoBOT) to perform fabricated ecosystem experiments and enhance data reproducibility. The work of analyzing plant roots under different kinds of nutrition and environmental conditions is also part of the DOE’s Carbon Negative Earthshot initiative (see sidebar).
