Thursday, September 15, 2022

Can a robot's ability to speak affect how much human users trust it?

Epi, the humanoid robot used by Krantz and his colleagues, is a robotics platform for human-robot interaction experiments and cognitive modelling. It was developed at Lund University Cognitive Science. Credit: Krantz, Balkenius & Johansson.

As robots become increasingly advanced, they are likely to find their way into many real-world settings, including homes, offices, malls, airports, health care facilities, and assisted living spaces. To promote their widespread use and implementation, however, roboticists should ensure that robots are well-perceived and trusted by humans.

Researchers at Lund University in Sweden have recently carried out a study aimed at better understanding what affects a human user's trust in robots. Their paper, set to appear in the proceedings of the SCRITA workshop at IEEE RO-MAN 2022, specifically tried to determine whether a humanoid robot's ability to speak can impact a human user's trust in it.

"The idea for the paper came about after we found some unexpected results in a previous experiment," Amandus Krantz, one of the researchers who carried out the study, told TechXplore. "We were investigating how faulty gaze behavior may impact trust in a humanoid social robot. The results showed a significant difference in trust before and after interaction with the robot across all conditions, but no decrease in trust from the faulty behavior. The only component that was unchanged between the conditions was a short speech from the robot."

Previous literature in robotics suggests that humans' trust in robots can depend on how intelligent they perceive them to be. Based on the findings gathered in their previous study, Krantz and his colleagues thus started to reflect on the possibility that a robot's ability to speak, which could be perceived as a sign of intelligence, influences how much a human user trusts the robot.

"We theorized that perhaps the speech component was increasing the perceived intelligence of the robot, enough that the resulting trust change masked the trust change from the faulty behavior," Krantz said.

To test their hypothesis, the researchers re-ran the experiment from their previous work, but this time with a robot that did not speak. They found that when the robot did not speak, users were more likely to notice its faulty behavior and tended to trust it less. This suggests that the robot's ability to speak could in fact increase the participants' trust in it.

"Each of our study participants was shown a video of a humanoid robot displaying either faulty or non-faulty behavior and either speaking or being mute," Krantz explained. "When speaking, the robot would give some facts about one of a series of objects that were presented to it. After seeing this video, the participants were given a range of questionnaires designed to estimate their trust in the robot, along with their perceptions about the robots' intelligence, likability, and animacy (how alive the robot seems)."

The researchers conducted their experiments online, engaging 227 participants. When they analyzed the participants' responses, they found that overall, the non-faulty robots were the most trusted. Interestingly, however, when a faulty robot could talk, participants reported trusting it almost as much as non-faulty robots.
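The article does not spell out the team's statistical analysis, but as a rough illustration, the following is a minimal sketch (in Python, using pandas and statsmodels, with made-up trust scores rather than the authors' data or code) of how trust ratings from such a 2x2 between-subjects design (faulty vs. non-faulty behavior, speaking vs. mute robot) might be compared with a two-way ANOVA:

    # Hypothetical sketch, not the authors' analysis: compare aggregated
    # questionnaire trust scores across the 2x2 design.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # Illustrative data: one row per participant, with condition labels
    # and an invented trust score per participant.
    data = pd.DataFrame({
        "faulty":   ["yes", "yes", "no",  "no", "yes", "no",  "yes", "no"],
        "speaking": ["yes", "no",  "yes", "no", "no",  "yes", "yes", "no"],
        "trust":    [4.1,   2.8,   4.6,   4.4,  3.0,   4.5,   4.2,   4.3],
    })

    # Two-way ANOVA: does speech moderate the effect of faulty behavior on trust?
    model = ols("trust ~ C(faulty) * C(speaking)", data=data).fit()
    print(sm.stats.anova_lm(model, typ=2))

In a result pattern like the one reported, the interaction term would be of interest: faulty behavior lowers trust mainly when the robot is mute, while a speaking faulty robot is trusted almost as much as a non-faulty one.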

"As far as we know, this is the first study that has investigated how the ability to speak impacts trust," Krantz said. "There are some similar studies, but they tend to investigate the effect of the contents of the speech (usually apologizing for an error), rather than possessing the ability to speak. As for practical implications, the results indicate that implementing some form of human-like speech component may be beneficial for manufacturers of consumer robots (such as robotic vacuum cleaners) who are looking to reduce disuse of their robots following an error in operation."

The recent work by this team of researchers offers valuable and interesting insight about how a robot's ability to speak can affect how humans perceive it and relate to it. In the future, their findings could encourage robotics companies and developers to place a greater emphasis on a robot's speech, as a means to increase potential users' trust in it.

"The experiments outlined in the paper were carried out online, which is known to potentially cause slightly different results from physical human-robot interaction experiments, so we are planning a follow-up study where participants interact with the robot in a real-world setting," Krantz said. "We are also planning a range of studies that investigates how  is affected by other aspects of a humanoid robot, such as gaze or pupil dilation/constriction."Study explores how a robot's inner speech affects a human user's trust

More information: Amandus Krantz, Christian Balkenius, Birger Johansson, Using speech to reduce loss of trust in humanoid social robots. arXiv:2208.13688v1 [cs.RO], arxiv.org/abs/2208.13688

© 2022 Science X Network
