Friday, February 26, 2021

What might sheep and driverless cars have in common? Following the herd

Researchers show how the social component of moral decision-making can influence the programming of autonomous vehicles and other technologies

UNIVERSITY OF SOUTHERN CALIFORNIA

Research News

Psychologists have long found that people behave differently when they learn of their peers' actions. A new study by computer scientists found that when participants in an experiment about autonomous vehicles were told that their peers were more likely to sacrifice their own safety, programming their vehicle to hit a wall rather than pedestrians at risk, the share of participants willing to make the same sacrifice increased by roughly two-thirds.

As computer scientists train machines to act as people's agents in all sorts of situations, the study's authors note that the social component of decision-making is often overlooked. This could be of great consequence: the authors argue that the trolley problem, long the scenario moral psychologists turn to, is problematic because it fails to capture the complexity of how humans actually make decisions.

Jonathan Gratch, one of the paper's authors, the principal investigator for the project, and a computer scientist at the USC Institute for Creative Technologies, says existing models assume that in high-stakes, life-and-death decisions people think differently than they actually do. Human decision-making, he indicates, is not governed by moral absolutes; rather, "it is more nuanced," says Gratch.

The researchers conducted four separate simulation experiments to understand how people might process and act on the moral dilemmas they would face as operators of a driverless car. The first three experiments focused on human behavior when faced with risk to themselves and to others in a negative scenario in which the vehicle would have to be programmed either to hit a wall or to hit five pedestrians. The authors found that participants used the severity of injury to themselves and the risk to others as guideposts for decision-making: the higher the risk to pedestrians, the more likely people were to sacrifice their own health. In addition, the risk to pedestrians did not have to be as high as the operator's own risk for the operator to choose to sacrifice their own well-being.
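The paper does not publish a formal decision rule, but a minimal sketch of the kind of trade-off the first three experiments probe might look like the following. All weights, thresholds, and the function itself are hypothetical, chosen only to illustrate the idea that pedestrian risk can outweigh a higher personal risk:

```python
# Hypothetical sketch of the risk trade-off probed in the first three
# experiments: an operator weighs expected harm to themselves against
# expected harm to pedestrians. All numbers are illustrative, not from
# the study.

def choose_action(p_injury_self: float,
                  p_injury_pedestrians: float,
                  n_pedestrians: int = 5,
                  altruism_weight: float = 1.5) -> str:
    """Return 'hit_wall' (self-sacrifice) or 'continue' (risk pedestrians).

    An altruism_weight > 1 reflects the reported finding that pedestrian
    risk did not need to be as high as the operator's own risk before
    operators chose self-sacrifice.
    """
    expected_harm_self = p_injury_self  # harm if the car hits the wall
    expected_harm_others = (altruism_weight *
                            p_injury_pedestrians * n_pedestrians)  # harm if it continues
    return "hit_wall" if expected_harm_others > expected_harm_self else "continue"


# Example: a moderate risk to five pedestrians outweighs a higher risk to self.
print(choose_action(p_injury_self=0.6, p_injury_pedestrians=0.2))  # hit_wall
```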

In the fourth experiment, the researchers added a social dimension, telling participants what their peers had opted to do in the same situation. In one simulation, knowing that peers had chosen to risk their own health changed participants' responses: the share willing to risk their health rose from 30 percent to 50 percent, an increase of roughly two-thirds. But this can go both ways, cautions Gratch. "Technically there are two forces at work. When people realize their peers don't care, this pulls people down to selfishness. When they realize they care, this pulls them up."
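Purely as an illustration, and not the authors' own model, the peer effect in this experiment can be pictured as pulling an individual's baseline willingness toward the observed peer norm. The conformity strength below is an assumed parameter, not a value reported in the study:

```python
# Hypothetical conformity adjustment, illustrating the "two forces" Gratch
# describes: peer behavior pulls individual willingness up or down toward
# the peer norm.

def adjust_willingness(baseline: float, peer_norm: float,
                       conformity: float = 0.5) -> float:
    """Shift a baseline probability of self-sacrifice toward the peer norm."""
    return baseline + conformity * (peer_norm - baseline)


# With a 30% baseline and peers largely choosing self-sacrifice (70%),
# a conformity strength of 0.5 yields 50%, matching the direction of the
# shift reported in the experiment.
print(adjust_willingness(0.30, 0.70))  # 0.5
print(adjust_willingness(0.30, 0.10))  # peers "don't care": pulled down to 0.2
```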

The research has implications for autonomous vehicles, including drones and boats, as well as robots programmed by humans. The authors suggest that manufacturers should understand how humans actually make decisions in life-or-death situations. They also argue that transparency about how machines are programmed, and the ability for human drivers to change settings before such situations arise, matter to the public, and that legislators should be aware of how vehicles might be programmed. Lastly, given the human susceptibility to conform to social norms, the authors believe that public campaigns describing how peers programmed their autonomous vehicles for self-sacrifice might influence future owners to change their own vehicle settings toward protecting others from injury and choosing self-sacrifice.

###

The authors of this study are Celso M. de Melo of the US Army Research Laboratory, Stacy Marsella of Northeastern University, and Jonathan Gratch of the USC Institute for Creative Technologies.
