A.I.
UVA and the Toyota Research Institute aim to give your car the power to reason
UNIVERSITY OF VIRGINIA SCHOOL OF ENGINEERING AND APPLIED SCIENCE
Self-driving cars are coming, but will you really be OK sitting passively while a 2,000-pound autonomous robot motors you and your family around town?
Would you feel more secure if, while autonomous technology is perfected over the next few years, your semi-autonomous car could explain to you what it’s doing — for example, why it suddenly braked when you didn’t?
Better yet, what if it could help your teenager not only learn to drive, but to drive more safely?
Yen-Ling Kuo, the Anita Jones Faculty Fellow and assistant professor of computer science at the University of Virginia School of Engineering and Applied Science, is training machines to use human language and reasoning to be capable of doing all of that and more. The work is funded by a two-year Young Faculty Researcher grant from the Toyota Research Institute.
“This project is about how artificial intelligence can understand the meaning of drivers’ actions through language modeling and use this understanding to augment our human capabilities,” Kuo said.
“By themselves, robots aren’t perfect, and neither are we. We don’t necessarily want machines to take over for us, but we can work with them for better outcomes.”
Eliminating the Need to Program Every Scenario
To reach that level of cooperation, you need machine learning models that imbue robots with generalizable reasoning skills.
That’s “as opposed to collecting large datasets to train for every scenario, which will be expensive, if not impossible,” Kuo said.
Kuo is collaborating with a team at the Toyota Research Institute to build language representations of driving behavior that enable a robot to associate the meaning of words with what it sees by watching how humans interact with the environment or by its own interactions with the environment.
Let’s say you’re an inexperienced driver, or maybe you grew up in Miami and moved to Boston. A car that helps you drive on icy roads would be handy, right?
This new intelligence will be especially important for handling out-of-the-ordinary circumstances, such as helping inexperienced drivers adjust to road conditions or guiding them through challenging situations.
“We would like to apply the learned representations in shared autonomy. For example, the AI can describe a high-level intention of turning right without skidding and give guidance to slow to a certain speed while turning right,” Kuo said. “If the driver doesn’t slow enough, the AI will adjust the speed further, or if the driver’s turn is too sharp, the AI will correct for it.”
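The shared-autonomy behavior Kuo describes can be sketched as a simple blending rule. This is an illustrative toy, not Kuo's actual system; the function name, the `authority` parameter, and all numbers are hypothetical:

```python
def shared_control(driver_speed, safe_speed, authority=0.5):
    """Blend the driver's commanded speed with an AI target speed.

    If the driver is already at or below the safe speed, the command
    passes through untouched; otherwise it is pulled toward the safe
    speed in proportion to the AI's authority (0 = driver only,
    1 = AI only). Purely illustrative values, not from Kuo's system.
    """
    if driver_speed <= safe_speed:
        return driver_speed
    return (1 - authority) * driver_speed + authority * safe_speed

# Driver enters an icy right turn too fast; the AI nudges the speed down.
print(shared_control(driver_speed=40.0, safe_speed=25.0, authority=0.6))  # ~31.0
```

The appeal of a rule like this is that the correction scales smoothly: a driver who is nearly safe gets a gentle nudge, while a far-too-fast command is corrected more strongly, matching the "adjust the speed further" behavior described above.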
Kuo will develop the language representations from a variety of data sources, including from a driving simulator she is building for her lab this summer.
Her work is being noticed. Kuo recently gave an invited talk on related research at the Association for the Advancement of Artificial Intelligence’s New Faculty Highlights 2024 program. She also has a forthcoming paper, “Learning Representations for Robust Human-Robot Interaction,” slated for publication in AI Magazine.
Advancing Human-Centered AI
Kuo’s proposal closely aligns with the Toyota Research Institute’s goals for advancing human-centered AI, interactive driving and robotics.
“Once language-based representations are learned, their semantics can be used to share autonomy between humans and vehicles or robots, promoting usability and teaming,” said Kuo’s co-investigator, Guy Rosman, who manages the institute’s Human Aware Interaction and Learning team.
“This harnesses the power of language-based reasoning into driver-vehicle interactions that better generalize our notion of common sense, well beyond existing approaches,” Rosman said.
That means if you ever do hand the proverbial keys over to your car, the trust enabled by Kuo’s research should help you steer clear of any worries.
Berkeley Lab researchers advance AI-driven plant root analysis
Enhancing biomass assessment and plant root growth monitoring in hydroponic systems
In a world striving for sustainability, understanding the hidden half of a living plant – the roots – is crucial. Roots are not just an anchor; they are a dynamic interface between the plant and soil, critical for water uptake, nutrient absorption, and, ultimately, the survival of the plant. In an effort to boost agricultural yields and develop crops resilient to climate change, scientists from Lawrence Berkeley National Laboratory’s (Berkeley Lab’s) Applied Mathematics and Computational Research (AMCR) and Environmental Genomics and Systems Biology (EGSB) Divisions have made a significant leap. Their latest innovation, RhizoNet, harnesses the power of artificial intelligence (AI) to transform how we study plant roots, offering new insights into root behavior under various environmental conditions.
This pioneering tool, detailed in a study published on June 5 in Scientific Reports, revolutionizes root image analysis by automating the process with exceptional accuracy. Traditional methods, which are labor-intensive and prone to errors, fall short when faced with the complex and tangled nature of root systems. RhizoNet steps in with a state-of-the-art deep learning approach, enabling researchers to track root growth and biomass with precision. Built on a convolutional neural network backbone, this new computational tool semantically segments plant roots for comprehensive biomass and growth assessment, changing the way laboratories can analyze plant roots and propelling efforts toward self-driving labs.
As Berkeley Lab’s Daniela Ushizima, lead investigator of the AI-driven software, explained, “The capability of RhizoNet to standardize root segmentation and phenotyping represents a substantial advancement in the systematic and accelerated analysis of thousands of images. This innovation is instrumental in our ongoing efforts to enhance the precision in capturing root growth dynamics under diverse plant conditions.”
Getting to the Roots
Root analysis has traditionally relied on flatbed scanners and manual segmentation methods, which are not only time-consuming but also susceptible to errors, particularly in extensive multi-plant studies. Root image segmentation also presents significant challenges due to natural phenomena like bubbles, droplets, reflections, and shadows. The intricate nature of root structures and the presence of noisy backgrounds further complicate the automated analysis process. These complications are particularly acute at smaller spatial scales, where fine structures are sometimes only as wide as a pixel, making manual annotation extremely challenging even for expert human annotators.
EGSB recently introduced the latest version (2.0) of EcoFAB, a novel hydroponic device that facilitates in-situ plant imaging by offering a detailed view of plant root systems. EcoFAB – developed via a collaboration between EGSB, the DOE Joint Genome Institute (JGI), and the Climate & Ecosystem Sciences division at Berkeley Lab – is part of an automated experimental system designed to perform fabricated ecosystem experiments that enhance data reproducibility. RhizoNet, which processes color scans of plants grown in EcoFAB that are subjected to specific nutritional treatments, addresses the scientific challenges of plant root analysis. It employs a sophisticated Residual U-Net architecture (an architecture used in semantic segmentation that improves upon the original U-Net by adding residual connections between input and output blocks within the same level, i.e. resolution, in both the encoder and decoder pathways) to deliver root segmentation specifically adapted for EcoFAB conditions, significantly enhancing prediction accuracy. The system also integrates a convexification procedure that serves to encapsulate identified roots from time series and helps quickly delineate the primary root components from complex backgrounds. This integration is key for accurately monitoring root biomass and growth over time, especially in plants grown under varied nutritional treatments in EcoFABs.
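The key idea behind the residual connections mentioned above can be shown in a few lines. This is a deliberately minimal sketch of a skip connection, not RhizoNet's actual network; `transform` is a hypothetical stand-in for a block's convolutional layers:

```python
def residual_block(x, transform):
    """Apply a learned transform, then add the input back (skip connection).

    In a Residual U-Net, blocks like this let each level of the encoder
    and decoder learn only a correction to its input, which eases the
    training of deep segmentation networks. Here `transform` is a toy
    function standing in for the block's convolutional layers.
    """
    return [xi + ti for xi, ti in zip(x, transform(x))]

# Toy "convolution": double every feature, then add the input back.
out = residual_block([1, 2, 3], lambda v: [2 * f for f in v])
print(out)  # [3, 6, 9]
```

Because the block's output is input plus correction, gradients can flow through the identity path even when the learned transform is poorly conditioned, which is why residual connections help very deep networks train at all.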
To illustrate this, the new Scientific Reports paper details how the researchers used EcoFAB and RhizoNet to process root scans of Brachypodium distachyon (a small grass species) plants subjected to different nutrient deprivation conditions over approximately five weeks. These images, taken every three to seven days, provide vital data that help scientists understand how roots adapt to varying environments. The high-throughput nature of EcoBOT, the new image acquisition system for EcoFABs, offers research teams the potential for systematic experimental monitoring – as long as data is analyzed promptly.
“We’ve made a lot of progress in reducing the manual work involved in plant cultivation experiments with the EcoBOT, and now RhizoNet is reducing the manual work involved in analyzing the data generated,” noted Peter Andeer, a research scientist in EGSB and a lead developer of EcoBOT, who collaborated with Ushizima on this work. “This increases our throughput and moves us toward the goal of self-driving labs.” Resources at the National Energy Research Scientific Computing Center (NERSC) – a U.S. Department of Energy (DOE) user facility located at Berkeley Lab – were used to train RhizoNet and perform inference, bringing this capability of computer vision to the EcoBOT, Ushizima noted.
“EcoBOT is capable of collecting images automatically, but it could not determine how the plant responds to different environmental changes: whether it is alive or not, growing or not,” Ushizima explained. “By measuring the roots with RhizoNet, we capture detailed data on root biomass and growth not solely to determine plant vitality but to provide comprehensive, quantitative insights that are not readily observable through conventional means. After training the model, it can be reused for multiple experiments (unseen plants).”
“In order to analyze the complex plant images from the EcoBOT, we created a new convolutional neural network for semantic segmentation,” added Zineb Sordo, a computer systems engineer in AMCR working as a data scientist on the project. “Our goal was to design an optimized pipeline that uses prior information about the time series to improve the model’s accuracy beyond manual annotations done on a single frame. RhizoNet handles noisy images, detecting plant roots from images so biomass and growth can be calculated.”
One Patch at a Time
During model tuning, the findings indicated that using smaller image patches significantly enhances the model's performance. In these patches, each neuron in the early layers of the artificial neural network has a smaller receptive field. This allows the model to capture fine details more effectively, enriching the latent space with diverse feature vectors. This approach not only improves the model's ability to generalize to unseen EcoFAB images but also increases its robustness, enabling it to focus on thin objects and capture intricate patterns despite various visual artifacts.
Smaller patches also help prevent class imbalance by excluding sparsely labeled patches – those with less than 20% of annotated pixels, predominantly background. The team’s results show high accuracy, precision, recall, and Intersection over Union (IoU) for smaller patch sizes, demonstrating the model's improved ability to distinguish roots from other objects or artifacts.
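The patch-exclusion step described above is straightforward to express in code. A minimal sketch, assuming binary 0/1 annotation masks; the function name is hypothetical, but the 20% cutoff mirrors the threshold stated in the text:

```python
def keep_patch(mask_patch, min_labeled=0.2):
    """Keep a patch only if enough of its pixels carry a (root) label.

    mask_patch is a 2D list of 0/1 annotation values. Patches that are
    mostly background (fewer than min_labeled of pixels annotated) are
    dropped to limit class imbalance during training.
    """
    flat = [p for row in mask_patch for p in row]
    return sum(flat) / len(flat) >= min_labeled

sparse = [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [1, 0, 0, 0]]  # 1/16 labeled
dense  = [[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [1, 0, 0, 0]]  # 4/16 labeled
print(keep_patch(sparse), keep_patch(dense))  # False True
```

Filtering at the patch level rather than reweighting the loss keeps the training distribution closer to balanced without changing the optimization itself.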
To validate the performance of root predictions, the paper compares predicted root biomass to actual measurements. Linear regression analysis revealed a significant correlation, underscoring the precision of automated segmentation over manual annotations, which often struggle to distinguish thin root pixels from similar-looking noise. This comparison highlights the challenge human annotators face and showcases the advanced capabilities of the RhizoNet models, particularly when trained on smaller patch sizes.
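The validation step above, regressing predicted biomass against measured biomass, amounts to an ordinary least-squares fit plus a correlation coefficient. A minimal sketch with hypothetical data (the paper's actual measurements are not reproduced here):

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y ~ a*x + b, plus Pearson correlation r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    a = sxy / sxx            # slope
    b = my - a * mx          # intercept
    r = sxy / (sxx * syy) ** 0.5  # correlation coefficient
    return a, b, r

# Hypothetical (predicted, measured) biomass pairs, not the paper's data.
pred = [1.0, 2.0, 3.0, 4.0]
meas = [1.1, 1.9, 3.2, 3.8]
a, b, r = linear_fit(pred, meas)
# A slope near 1 and r near 1 indicate predictions track measurements well.
```

A slope close to 1, an intercept close to 0, and a high r together indicate that the automated segmentation tracks the ground-truth biomass rather than merely correlating with it.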
This study demonstrates the practical applications of RhizoNet in current research settings, the authors noted, and lays the groundwork for future innovations in sustainable energy solutions as well as carbon-sequestration technology using plants and microbes. The research team is optimistic about the implications of their findings.
“Our next steps involve refining RhizoNet’s capabilities to further improve the detection and branching patterns of plant roots,” said Ushizima. "We also see potential in adapting and applying these deep-learning algorithms for roots in soil as well as new materials science investigations. We're exploring iterative training protocols, hyperparameter optimization, and leveraging multiple GPUs. These computational tools are designed to assist science teams in analyzing diverse experiments captured as images, and have applicability in multiple areas.”
Further research work in plant root growth dynamics is described in a pioneering book on autonomous experimentation edited by Ushizima and Berkeley Lab colleague Marcus Noack that was released in 2023. Other team members from Berkeley Lab include Peter Andeer, Trent Northen, Camille Catoulos, and James Sethian. This multidisciplinary group of scientists is part of Twin Ecosystems, a DOE Office of Science Genomic Science Program project that integrates computer vision software and autonomous experimental design software developed at Berkeley Lab (gpCAM) with an automated experimental system (EcoFAB and EcoBOT) to perform fabricated ecosystem experiments and enhance data reproducibility. The work of analyzing plant roots under different kinds of nutrition and environmental conditions is also part of the DOE’s Carbon Negative Earthshot initiative (see sidebar).
JOURNAL
Scientific Reports
METHOD OF RESEARCH
Computational simulation/modeling
SUBJECT OF RESEARCH
Not applicable
ARTICLE TITLE
Berkeley Lab Researchers Advance AI-Driven Plant Root Analysis
ARTICLE PUBLICATION DATE
20-Jun-2024
Can AI learn like us?
COLD SPRING HARBOR LABORATORY
It reads. It talks. It collates mountains of data and recommends business decisions. Today’s artificial intelligence might seem more human than ever. However, AI still has several critical shortcomings.
“As impressive as ChatGPT and all these current AI technologies are, in terms of interacting with the physical world, they’re still very limited. Even in things they do, like solve math problems and write essays, they take billions and billions of training examples before they can do them well,” explains Cold Spring Harbor Laboratory (CSHL) NeuroAI Scholar Kyle Daruwalla.
Daruwalla has been searching for new, unconventional ways to design AI that can overcome such computational obstacles. And he might have just found one.
The key was moving data. Nowadays, most of modern computing’s energy consumption comes from bouncing data around. In artificial neural networks, which are made up of billions of connections, data can have a very long way to go. So, to find a solution, Daruwalla looked for inspiration in one of the most computationally powerful and energy-efficient machines in existence—the human brain.
Daruwalla designed a new way for AI algorithms to move and process data much more efficiently, based on how our brains take in new information. The design allows individual AI “neurons” to receive feedback and adjust on the fly rather than wait for a whole circuit to update simultaneously. This way, data doesn’t have to travel as far and gets processed in real time.
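The contrast between "adjust on the fly" and waiting for a whole-circuit update can be illustrated with a plain online delta rule for a single neuron. This is a generic textbook rule for illustration only, not the specific working-memory-based rule from the CSHL study:

```python
def online_update(weights, inputs, target, lr=0.1):
    """One 'adjust on the fly' step for a single linear neuron.

    Each synapse is nudged immediately using a locally available error
    signal, instead of waiting for a full-network batch update. This is
    a plain online delta rule, shown only to illustrate per-step local
    adjustment; it is not the rule proposed in the study.
    """
    output = sum(w * x for w, x in zip(weights, inputs))
    error = target - output
    return [w + lr * error * x for w, x in zip(weights, inputs)]

w = [0.0, 0.0]
for _ in range(50):  # repeated single-example feedback, no batching
    w = online_update(w, [1.0, 2.0], target=1.0)
print(sum(wi * xi for wi, xi in zip(w, [1.0, 2.0])))  # approaches 1.0
```

Each update uses only the neuron's own inputs and its immediate error, so no activity needs to be held frozen while a global gradient is computed, which is the efficiency point the paragraph above makes.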
“In our brains, our connections are changing and adjusting all the time,” Daruwalla says. “It’s not like you pause everything, adjust, and then resume being you.”
The new machine-learning model provides evidence for an as-yet-unproven theory that correlates working memory with learning and academic performance. Working memory is the cognitive system that enables us to stay on task while recalling stored knowledge and experiences.
“There have been theories in neuroscience of how working memory circuits could help facilitate learning. But there isn’t something as concrete as our rule that actually ties these two together. And so that was one of the nice things we stumbled into here. The theory led to a rule where adjusting each synapse individually necessitated this working memory sitting alongside it,” says Daruwalla.
Daruwalla’s design may help pioneer a new generation of AI that learns like we do. That would not only make AI more efficient and accessible—it would also be somewhat of a full-circle moment for neuroAI. Neuroscience has been feeding AI valuable data since long before ChatGPT uttered its first digital syllable. Soon, it seems, AI may return the favor.
JOURNAL
Frontiers in Computational Neuroscience
We may soon be able to detect cancer with AI
A new paper in Biology Methods & Protocols, published by Oxford University Press, indicates that it may soon be possible for doctors to use artificial intelligence (AI) to detect and diagnose cancer in patients, allowing for earlier treatment. Cancer remains one of the most challenging human diseases, with over 19 million cases and 10 million deaths annually. The evolutionary nature of cancer makes it difficult to treat late-stage tumours.
Genetic information is encoded in DNA by patterns of the four bases—denoted by A, T, G and C—that make up its structure. Environmental changes outside the cell can cause some DNA bases to be modified by adding a methyl group. This process is called “DNA methylation.” Each individual cell possesses millions of these DNA methylation marks. Researchers have observed changes to these marks in early cancer development; these changes could assist in the early diagnosis of cancer. It’s possible to examine which bases in DNA are methylated in cancers and to what extent, compared to healthy tissue. Identifying the specific DNA methylation signatures indicative of different cancer types is akin to searching for a needle in a haystack. This is where the researchers involved in this study believe that AI can help.
Investigators from Cambridge University and Imperial College London trained an AI model, using a combination of machine and deep learning, to look at the DNA methylation patterns and identify 13 different cancer types (including breast, liver, lung, and prostate cancers) from non-cancerous tissue with 98.2% accuracy. This model relies on tissue samples (not DNA fragments in blood) and would need additional training and testing on a more diverse collection of biopsy samples to be ready for clinical use. The researchers here believe that an important aspect of this study was the use of an explainable and interpretable core AI model, which provided insights into the reasoning behind its predictions. The researchers explored the inner workings of their model and showed that the model reinforces and enhances understanding of the underlying processes contributing to cancer.
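The core idea of matching a sample's methylation signature to a cancer type can be sketched with a toy nearest-centroid classifier. This is not the published model, which combines machine and deep learning; the data and labels here are invented for illustration:

```python
def nearest_centroid(train, sample):
    """Classify a methylation profile by its closest class centroid.

    `train` maps a label to a list of profiles, where each profile is a
    list of per-site methylation fractions between 0 and 1. This toy
    rule only illustrates signature matching; it is not the study's
    actual machine/deep learning model.
    """
    best_label, best_dist = None, float("inf")
    for label, profiles in train.items():
        n = len(profiles)
        centroid = [sum(p[i] for p in profiles) / n
                    for i in range(len(profiles[0]))]
        dist = sum((c - s) ** 2 for c, s in zip(centroid, sample))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Invented per-site methylation fractions, not data from the paper.
train = {
    "healthy": [[0.1, 0.2, 0.1], [0.2, 0.1, 0.2]],
    "tumour":  [[0.8, 0.7, 0.9], [0.9, 0.8, 0.8]],
}
print(nearest_centroid(train, [0.85, 0.75, 0.8]))  # tumour
```

Real methylation arrays measure hundreds of thousands of sites, which is why the needle-in-a-haystack framing above applies and why learned, interpretable models are needed instead of hand-picked signatures.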
Identifying these unusual methylation patterns (potentially from biopsies) would allow health care providers to detect cancer early. This could potentially improve patient outcomes dramatically, as most cancers are treatable or curable if detected early enough.
“Computational methods such as this model, through better training on more varied data and rigorous testing in the clinic, will eventually provide AI models that can help doctors with early detection and screening of cancers,” said the paper’s lead author, Shamith Samarajiwa. “This will provide better patient outcomes.”
The paper, “Early detection and diagnosis of cancer with interpretable machine learning to uncover cancer-specific DNA methylation patterns,” is available (on June 20th) at https://doi.org/10.1093/biomethods/bpae028.
Direct correspondence to:
Shamith Samarajiwa
Department of Metabolism, Digestion and Reproduction, Faculty of Medicine,
Imperial College London
Du Cane Road
London W12 0NN, UNITED KINGDOM
s.samarajiwa@imperial.ac.uk
To request a copy of the study, please contact:
Daniel Luzer
daniel.luzer@oup.com
JOURNAL
Biology Methods and Protocols
METHOD OF RESEARCH
Experimental study
SUBJECT OF RESEARCH
People
ARTICLE TITLE
Early detection and diagnosis of cancer with interpretable machine learning to uncover cancer-specific DNA methylation patterns
ARTICLE PUBLICATION DATE
20-Jun-2024