Key to making robots social: Human interaction, not design
CORNELL UNIVERSITY
ITHACA, N.Y. – Researchers who develop social robots – ones that people interact with – focus too much on design features and not enough on sociological factors, according to Cornell and Indiana University scholars.
“If we want to understand what makes a robot social, we have to look at the broader scope of the communities around robots and people’s interactions with each other,” said Malte Jung, co-author and associate professor of information science at Cornell. “Sociality is constructed through interactions people have with each other around the machine. It’s not just about programming a better character for the robot, making it respond better to human social features, making it look cuter or behave more naturally.”
The research was based on field work by Waki Kamino, a doctoral student in the field of information science and the paper’s lead author, who spent months immersed in Tokyo’s robot-friendly culture.
Her work informed one of the paper’s primary findings: In Japan, manufacturers and robot owners together helped establish new norms for robots as social agents. Companies incorporated familiar designs into their robots and brought owners together by hosting sponsored events, while owners made their robots part of everyday interactions with friends and met up regularly in public spaces, robots in tow.
The research team calls on the field of human-robot interaction (HRI) to consider a broader sociological view when designing and building robot companions.
“Traditionally, HRI research has always looked at just this one interaction between one person and one robot,” Jung said. “We really have to look at the broader scope of the communities around people's interactions with each other and take all of this into consideration.”
“Waki’s research shows that using robots doesn’t mean you’re isolating yourself with the robot,” said Selma Šabanović, professor at Indiana University and a paper co-author. “Interacting with robots is actually a social practice that you do together with others.”
-30-
Advancing the safety of AI-driven machinery requires closer collaboration with humans
TAMPERE UNIVERSITY
An ongoing research project at Tampere University aims to create adaptable safety systems for highly automated off-road mobile machinery to meet industry needs. The research has revealed critical gaps in compliance with public-safety legislation when mobile working machines are controlled by artificial intelligence.
As the adoption of highly automated off-road machinery increases, so does the need for robust safety measures. Conventional safety processes often fail to consider the health and safety risks posed by systems controlled by artificial intelligence (AI).
Marea de Koning, a doctoral researcher specialising in automation at Tampere University, aims to ensure public safety without compromising technological advancement by developing a safety framework tailored specifically for autonomous mobile machines that operate in collaboration with humans. The framework is intended to enable original equipment manufacturers (OEMs), safety and system engineers, and other industry stakeholders to create safety systems that comply with evolving legislation.
Balance between humans and autonomous machines
Anticipating all the possible ways a hazard can emerge and ensuring that the AI can safely manage hazardous scenarios is practically impossible. We need to adjust our approach to safety to focus more on finding ways to successfully manage unforeseen events.
We need robust risk management systems, often incorporating a human-in-the-loop safety option in which a human supervisor is expected to intervene when necessary. In autonomous machinery, however, relying on human intervention is impractical. According to de Koning, human performance can degrade measurably when automation is used, due to factors such as boredom, confusion, limited cognitive capacity, loss of situational awareness and automation bias. These factors significantly affect safety, so the machine itself must become capable of safely managing its own behaviour.
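To make the point concrete, the sketch below shows one common supervisory pattern: when a hazard is detected, the machine requests human intervention, but it never depends on a response and falls back to a safe state on a timeout. This is a minimal, hypothetical illustration only; the class names, modes and timeout value are assumptions and are not taken from de Koning's framework or the published article.

# Hypothetical sketch: a machine that escalates to a human supervisor
# but never relies on one, and stops itself if no response arrives.
# Names, modes and thresholds are illustrative assumptions.
import time
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()   # normal operation
    AWAIT_HUMAN = auto()  # hazard detected, supervisor asked to intervene
    SAFE_STATE = auto()   # machine has stopped itself

class SafetySupervisor:
    """Toy supervisory layer: ask a human to take over, but do not depend on it."""

    def __init__(self, human_timeout_s: float = 2.0):
        self.mode = Mode.AUTONOMOUS
        self.human_timeout_s = human_timeout_s
        self._hazard_raised_at = None

    def on_hazard_detected(self):
        # Request human intervention, but start a timeout: boredom, lost
        # situational awareness or automation bias may delay the response.
        self.mode = Mode.AWAIT_HUMAN
        self._hazard_raised_at = time.monotonic()

    def on_human_acknowledged(self):
        # The supervisor took control in time; hand over responsibility.
        if self.mode is Mode.AWAIT_HUMAN:
            self.mode = Mode.AUTONOMOUS

    def tick(self):
        # Called periodically by the control loop: if no human response
        # arrives within the timeout, the machine brings itself to a stop.
        if (self.mode is Mode.AWAIT_HUMAN
                and self._hazard_raised_at is not None
                and time.monotonic() - self._hazard_raised_at > self.human_timeout_s):
            self.mode = Mode.SAFE_STATE

if __name__ == "__main__":
    sup = SafetySupervisor(human_timeout_s=0.1)
    sup.on_hazard_detected()
    time.sleep(0.2)   # no human response arrives
    sup.tick()
    print(sup.mode)   # Mode.SAFE_STATE: the machine degrades itself safely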
“My approach considers hazards with AI-driven decision-making, risk assessment, and adaptability to unforeseen scenarios. I think it is important to actively engage with industry partners to ensure real-world applicability. By collaborating with manufacturers, it is possible to bridge the gap between theoretical frameworks and practical implementation,” she says.
The framework is intended to support OEMs in designing and developing compliant safety systems and to ensure that their products adhere to evolving regulations.
Integrating the framework into existing machinery
Marea de Koning started her research in November 2020 and will finish it by November 2024. The project is funded partly by the Doctoral School of Industry Innovations and partly by a Finnish system supplier.
De Koning’s next research project, starting in April, will focus on integrating a subset of her safety framework and rigorously testing its effectiveness. Regulation (EU) 2023/1230 will replace Directive 2006/42/EC as of January 2027, posing significant challenges for OEMs.
“I’m doing everything I can to ensure that safety remains at the forefront of technological advancements,” she concludes.
The research provides valuable insights for policymakers, engineers and safety professionals. The article presenting the findings, “A Comprehensive Approach to Safety for Highly Automated Off-Road Machinery under Regulation 2023/1230”, was published in the journal Safety Science.
JOURNAL
Safety Science
SUBJECT OF RESEARCH
Not applicable
ARTICLE TITLE
A comprehensive approach to safety for highly automated off-road machinery under Regulation 2023/1230