Saturday, April 08, 2023

Artificial intelligence: ChatGPT statements can influence users’ moral judgements

Peer-Reviewed Publication

SCIENTIFIC REPORTS

Human responses to moral dilemmas can be influenced by statements written by the artificial intelligence chatbot ChatGPT, according to a study published in Scientific Reports. The findings indicate that users may underestimate the extent to which their own moral judgements can be influenced by the chatbot.

Sebastian Krügel and colleagues asked ChatGPT (powered by the artificial intelligence language processing model Generative Pretrained Transformer 3) multiple times whether it is right to sacrifice the life of one person in order to save the lives of five others. They found that ChatGPT wrote statements arguing both for and against sacrificing one life, indicating that it is not biased towards a certain moral stance. The authors then presented 767 US participants, who were on average 39 years old, with one of two moral dilemmas that required them to choose whether to sacrifice one person’s life to save five others. Before answering, participants read a statement provided by ChatGPT arguing either for or against sacrificing one life to save five. Statements were attributed either to a moral advisor or to ChatGPT. After answering, participants were asked whether the statement they read had influenced their answers.

The authors found that participants were more likely to find sacrificing one life to save five acceptable or unacceptable, depending on whether the statement they read argued for or against the sacrifice. This was true even when the statement was attributed to ChatGPT. These findings suggest that participants may have been influenced by the statements they read, even when those statements were attributed to a chatbot.

Eighty percent of participants reported that their answers were not influenced by the statements they read. However, the authors found that the answers participants believed they would have provided without reading the statements were still more likely to agree with the moral stance of the statement they did read than with the opposite stance. This indicates that participants may have underestimated the influence of ChatGPT’s statements on their own moral judgements.

The authors suggest that the potential for chatbots to influence human moral judgements highlights the need for education to help humans better understand artificial intelligence. They propose that future research could design chatbots that either decline to answer questions requiring a moral judgement or answer these questions by providing multiple arguments and caveats.

###

Article details

ChatGPT’s inconsistent moral advice influences users’ judgment

DOI: 10.1038/s41598-023-31341-0

Corresponding Author:

Sebastian Krügel
Technische Hochschule Ingolstadt, Ingolstadt, Germany
Email: sebastian.kruegel@thi.de

Please link to the article in online versions of your report (the URL will go live after the embargo ends): https://www.nature.com/articles/s41598-023-31341-0.

Convenience, control among benefits that inspire automated feature use

Peer-Reviewed Publication

PENN STATE

UNIVERSITY PARK, Pa. — People often complain about the occasional misfires of automated features, such as autocorrect, but users generally enjoy interacting with the tools, according to researchers. They added that focusing on certain benefits of automated features may help developers build automated tools that people use more and complain about less.

In a study, researchers said that users appreciate the convenience and control of automated features, which also include YouTube’s autoplay and Google Gmail’s smart compose. People listed the technology’s ability to learn about their personal preferences as another reason they like automated tools.

“We are awash in automated features,” said S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory at Penn State University. “Although we crave more and more interactive media, and enjoy interactivity in our daily digital experiences, we also value these automated features, which are highly popular. So, there’s a bit of a contradiction. On the one hand, we want to be involved. But on the other hand, we want the systems to do their own thing.”

According to the researchers, because automated tools offer more convenience and control, users may not mind losing some of their ability to interact with the technology.

“Automated features can make a device or tool easier to use and free users from constantly engaging in repetitive tasks,” said Chris "Cheng" Chen, assistant professor in communication and design at Elon University and first author of the study.

Chen, a former doctoral student in mass communication at Penn State, added that people also appreciate the ability of automated features to remember and learn from previous interactions, a capability she called “system-initiated personalization.” This feature saves users from manually adding their preferences to the system.

According to the researchers, users tend to complain about automated features when these features interfere too much with their ability to interact smoothly with their devices.

Developers and designers, therefore, may want to consider designing systems that carefully blend interactivity and automation, also referred to as interpassivity, said Sundar, who is also an affiliate of Penn State’s Institute for Computational and Data Sciences and director of Penn State’s Center for Socially Responsible Artificial Intelligence.

“Interpassivity is a delicate combination of automation and interactivity,” said Sundar. “It's not just one or the other. On the one hand, we want things to be automated and to reduce tedious tasks, which we are happy to outsource to the machine. But we also want to reserve the right to interact and be notified so that we can provide consent for the system to engage in this automation process.”

While convenience may often be the most obvious benefit of automated features, Sundar said that developers should also consider other gratifications as they design these services.

“Automated features are meant to give users more convenience, but designers need to keep in mind that there are other aspects, like the user control that people desire, in order for current automated features, as well as ones developed in the future, to be successful,” Sundar added.

For example, Sundar said, many of the current complaints about automated features derive from a feeling of powerlessness to change settings and a lack of consent.

"When autocorrecting our e-mail messages or autocompleting our sentences, our smartphones tend to go with their version, requiring the user to go through extra steps to over-ride system suggestions,” said Sundar. Affording easy control should be considered an important design consideration, he added.

The researchers used both focus groups and a survey to study people’s reactions to automated features. They conducted three online focus groups with a total of 18 participants, asking them about their met and unmet needs when using automated features.

The responses from the focus groups shaped the survey, which was administered to 498 participants on an online crowdsourcing platform. Those participants were asked about 11 automated features in their daily media experience: autofill, autosave, auto-suggestions, autocomplete, auto-importing, auto-scrolling, smart replies and smart compose, auto-tagging, auto-correct, grayscale and autoplay.

The study found that users perceive autosave as more convenient than grayscale, auto-scrolling and autoplay. Autosave was also rated higher than autofill and grayscale for remembering users’ preferences. User control, however, was rated as equally important across all automated features.

Sangwook Lee, a doctoral student in mass communication at Penn State, worked with Chen and Sundar.

The researchers published their findings recently in the journal Behaviour & Information Technology.
