How to build your own robot friend: Making AI education more accessible
USC researchers develop new open-source platform to help students build their own low-cost robot companion from scratch
From smart virtual assistants and self-driving cars to digital health and fraud prevention systems, AI technology is transforming almost every aspect of our daily lives—and education is no different. For all its promise, the rise of AI, like any new technology, raises some pressing ethical and equity questions.
How can we ensure that such a powerful tool can be accessed by all students regardless of background?
Inspired by this call to action, USC researchers have created a low-cost, accessible learning kit to help college and high school students build their own “robot friend.” Students can personalize the robot’s “body,” program the robot to mimic their head posture, and learn about AI ethics and fairness in an engaging, accessible way.
“We’re proposing this open-source model to not only improve education in AI for all students but also to make human-interaction research more affordable for labs and research institutions,” said Shi. “Ultimately, we want to increase access to human-centered AI education for college students and create a pathway to more accessible research.”
To reduce costs and development time for learners, the team customized and simplified Blossom, a small, open-source robot originally developed by Hoffman at Cornell University. Blossom is a common fixture in USC’s Interaction Lab—Shi previously used the robot to design better AI voices for mindfulness exercises, while O’Connell programmed it to act as a “study buddy” for students with ADHD symptoms.
Last year, the duo began to devise ways to use the robot for educational purposes and set to work creating a low-cost, customizable and “human-focused” module that could mirror some of the ways that students will interact with technology in their everyday lives.
The system is outlined in a new study, titled “Build Your Own Robot Friend: An Open-Source Learning Module for Accessible and Engaging AI Education,” presented this week in the education symposium track of the AAAI Conference on Artificial Intelligence.
“We believe it is important for students to learn about fairness and ethics in AI in the same way that we learned about math and physics in K-12,” said co-lead author Zhonghao Shi, a doctoral student in computer science who conducts his research in the USC Interaction Lab led by Professor Maja Matarić. “We may not use these subjects every day, but having a basic understanding of these concepts helps us do better work and be mindful of new technologies.”
Supported by the National Science Foundation, the paper is co-lead-authored by Amy O’Connell, a USC computer science doctoral student and Shi’s labmate, and Zongjian Li, a software engineer working with Mohammad Soleymani, a research associate professor at the USC Institute for Creative Technologies. Soleymani and Matarić are co-authors, in addition to Guy Hoffman from Cornell University, and USC computer science undergraduate students Siqi Liu and Jennifer Ayissi.
Hands-on experience
The three-part open-source learning module provides students with hands-on experience and introductory instruction about various aspects of AI, including robotics, machine learning, software engineering, and mechanical engineering. It helps to address a gap in the market for AI education, said Shi and O’Connell.
Currently, pre-built robots, such as the NAO, are unaffordable for schools with limited resources, while educational robot kits, such as LEGO Mindstorms, though affordable, do not adapt to students at different levels.
To make the robot more affordable, the team developed strategies to cut its cost. In the version of Blossom presented in the study, the parts are made with 3D printers instead of more costly laser cutting. Currently, one of the team’s customizable robots costs around $250 to make; by comparison, a NAO robot runs around $15,000.
O’Connell, who learned to crochet during the pandemic, designed five new Blossom exteriors, including a baby onesie and knitted and crocheted options, and created detailed, easy-to-follow patterns and tutorials for each low-cost, customizable version.
After constructing their robot friend, students are encouraged to further customize Blossom with, for instance, mechanical eyebrows, color-changing lights, or even an expressive face screen. For O’Connell, creativity has been a crucial part of her own engineering journey.
“Crafting and engineering require similar strengths like counting, planning, and spatial reasoning,” said O’Connell. “By incorporating crafting into this project, we hope to draw in creative students who might not have considered how their skills align with robotics and engineering.”
Understanding ethics and fairness
The system was piloted in a two-day workshop in May 2023 with 15 undergraduate students from a local minority-serving institution. Four teams of students constructed Blossom robots following the learning module’s assembly guide, each with a blank knitted exterior to personalize with accessories. On the second day, the students used pre-trained head-pose tracking and gesture recognition models to detect a user’s nodding and have the robot mimic it.
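The nod-mimicry exercise can be sketched roughly as follows. This is a hypothetical simplification, not the module’s actual code: it assumes an upstream pose estimator (not shown) supplies per-frame head pitch angles, and it flags a nod whenever the pitch dips past a threshold and then recovers.

```python
def detect_nod(pitch_degrees, down_thresh=10.0, up_thresh=3.0):
    """Count nods in a stream of head pitch angles.

    pitch_degrees: per-frame head pitch (0 = level, positive = looking down).
    A nod is a dip below down_thresh followed by a return above up_thresh.
    """
    nods = 0
    dipped = False
    for p in pitch_degrees:
        if not dipped and p > down_thresh:
            dipped = True       # head has tilted down far enough
        elif dipped and p < up_thresh:
            nods += 1           # head came back up: count one nod
            dipped = False
    return nods
```

In a real pipeline, each detected nod would trigger the robot’s own nodding motion; the threshold values here are illustrative, not tuned.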
In post-workshop surveys, the researchers found that 92% of participants believed the workshop helped them learn more about the topics covered, and all participants said it encouraged them to study robotics and AI further in the future.
“Equipping users with AI literacy, including an understanding of AI ethics and fairness, is crucial to avoid unintended discrimination against marginalized groups,” said Shi.
In continued work, the team plans to further evaluate the module and adapt it for high school and younger K-12 students. Ultimately, the researchers hope to expand access for students at different educational levels.
“We’re excited to share more about our project with people from around the world,” Shi said. “We want to make sure that people from different kinds of socioeconomic backgrounds have the opportunity to gain an education on AI and participate in the process of improving AI for future use.”
ARTICLE TITLE
Build Your Own Robot Friend: An Open-Source Learning Module for Accessible and Engaging AI Education
ARTICLE PUBLICATION DATE
22-Feb-2024
Diversifying data to beat bias
USC researchers propose a novel approach to mitigate bias in machine learning model training
Peer-Reviewed Publication

AI holds the potential to revolutionize healthcare, but it also brings with it a significant challenge: bias. For instance, a dermatologist might use an AI-driven system to help identify suspicious moles. But what if the machine learning model was trained primarily on image data from lighter skin tones, and misses a common form of skin cancer on a darker-skinned patient?
This is a real-world problem. In 2021, researchers found that free image databases that could be used to train AI systems to diagnose skin cancer contain very few images of people with darker skin. It turns out, AI is only as good as its data, and biased data can lead to serious outcomes, including unnecessary surgery and even missing treatable cancers.
In a new paper published at the AAAI Conference on Artificial Intelligence this week, USC computer science researchers propose a novel approach to mitigate bias in machine learning model training, specifically in image generation.
The researchers used a family of algorithms, called “quality-diversity algorithms” or QD algorithms, to create diverse synthetic datasets that can strategically “plug the gaps” in real-world training data.
The paper, titled “Quality-Diversity Generative Sampling for Learning with Synthetic Data,” was lead-authored by Allen Chang, a senior double majoring in computer science and applied math. His co-authors are Matthew Fontaine, a USC computer science doctoral student; Stefanos Nikolaidis, Fluor Early Career Chair in Engineering and assistant professor of computer science; Maja Matarić, Chan Soon-Shiong Chair and Distinguished Professor of Computer Science, Neuroscience, and Pediatrics; and Serena Booth, a doctoral graduate of the Massachusetts Institute of Technology.
“I think it is our responsibility as computer scientists to better protect all communities, including minority or less frequent groups, in the systems we design,” said Chang. “We hope that quality-diversity optimization can help to generate fair synthetic data for broad impacts in medical applications and other types of AI systems.”
Increasing fairness
While generative AI models have been used to create synthetic data in the past, “there’s a danger of producing biased data, which can further bias downstream models, creating a vicious cycle,” said Chang.
Quality-diversity algorithms, on the other hand, are typically used to generate diverse solutions to a problem, for instance, helping robots explore unknown environments or generating levels in a video game. In this case, the algorithms were put to work in a new way: creating diverse synthetic datasets.
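The core idea can be sketched with a minimal MAP-Elites-style loop, one of the best-known quality-diversity algorithms: keep only the best sample in each cell of a feature grid, so the retained set covers the feature space evenly rather than clustering around the most common samples. The sketch below is illustrative only and is not the authors’ implementation; `sample_fn`, `quality_fn`, and `features_fn` are hypothetical stand-ins for a generative model’s sampler, a quality score, and the diversity measures (such as skin tone or age).

```python
import random

def map_elites(sample_fn, quality_fn, features_fn, bins=5, iterations=2000, seed=0):
    """Minimal MAP-Elites-style loop: keep the highest-quality sample in each
    cell of a feature grid, yielding a set that spans the feature space."""
    rng = random.Random(seed)
    archive = {}  # cell -> (quality, sample)
    for _ in range(iterations):
        x = sample_fn(rng)  # e.g. one output drawn from a generative model
        # Discretize each feature (assumed normalized to [0, 1]) into a grid cell.
        cell = tuple(min(int(f * bins), bins - 1) for f in features_fn(x))
        q = quality_fn(x)
        if cell not in archive or q > archive[cell][0]:
            archive[cell] = (q, x)  # this sample becomes the cell's elite
    return archive

# Toy demo: "samples" are 2-D points in [0,1)^2; the features are the
# coordinates themselves, and quality favors points near the center.
demo = map_elites(
    sample_fn=lambda rng: (rng.random(), rng.random()),
    quality_fn=lambda x: -((x[0] - 0.5) ** 2 + (x[1] - 0.5) ** 2),
    features_fn=lambda x: x,
)
print(len(demo))  # up to bins * bins = 25 cells covered
```

In the data-diversification setting, each cell would correspond to a combination of attributes (e.g. a skin-tone band crossed with an age band), so the archive fills in exactly the underrepresented combinations that plain sampling misses.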
Using this method, the team was able to generate a diverse dataset of around 50,000 images in 17 hours, around 20 times more efficiently than traditional methods of “rejection sampling,” said Chang. The team tested the dataset on up to four measures of diversity—skin tone, gender presentation, age, and hair length.
“We found that training data produced with our method has the potential to increase fairness in the machine learning model, increasing accuracy on faces with darker skin tones, while maintaining accuracy from training on additional data,” said Chang.
“This is a promising direction for augmenting models with bias-aware sampling, which we hope can help AI systems perform accurately for all users.”
Notably, the method increases the representation of intersectional groups, a term for groups with multiple overlapping identities, in the data. People who both have dark skin tones and wear eyeglasses, for instance, are especially scarce in traditional real-world datasets.
"While there has been previous work on leveraging QD algorithms to generate diverse content, we show for the first time that generative models can use QD to repair biased classifiers," said Nikolaidis. "They do this by iteratively generating and rebalancing content across user-specified features, using the newly balanced content to improve classifier fairness. This work is a first step in the direction of enabling biased models to 'self-repair'' by iteratively generating and retraining on synthetic data."
ARTICLE TITLE
Quality-Diversity Generative Sampling for Learning with Synthetic Data
ARTICLE PUBLICATION DATE
22-Feb-2024