By Dr. Tim Sandle
SCIENCE EDITOR
DIGITAL JOURNAL
December 29, 2025

Chatbots are seen as one of the greatest annoyances - Copyright AFP OLIVIER MORIN
AI continues to advance across all fields, in differing ways and at different levels of sophistication. However, our acceptance of AI depends on the context in which we seek to use it. Meal recommendations are one thing; morality is another.
This is not least because studies have demonstrated that AI systems tend to take on human biases and amplify them. In addition, people interacting with biased AI systems can become even more biased themselves, creating a potential snowball effect in which minute biases in the original datasets are amplified by the AI, which in turn increases the biases of the person using the AI.
As an example, AI systems like ChatGPT can develop ‘us versus them’ biases similar to humans — showing favouritism toward their perceived ‘ingroup’ while expressing negativity toward ‘outgroups’.
Artificial moral advisors
Artificial moral advisors (AMAs) are systems based on artificial intelligence (AI) that are being designed to assist humans in making moral decisions based on established ethical theories, principles, or guidelines. While prototypes are being developed, AMAs are not yet in practical use offering consistent, bias-free recommendations and rational moral advice.
Yet as such AI-powered machines increase in their technological capabilities and move into the moral domain, it is critical that governments and technologists understand how people think about artificial moral advisors.
It would appear there is some way to go.
Research from the University of Kent’s School of Psychology has explored how people would perceive these advisors and if they would trust their judgement, in comparison with human advisors.
The study found that while artificial intelligence might have the potential to offer impartial and rational advice, people still do not fully trust it to make ethical decisions on moral dilemmas.
A significant aversion
The research shows that people have a significant aversion to AMAs (compared with humans) giving moral advice, even when the advice given is identical. This aversion was particularly pronounced when advisors, human and AI alike, gave advice based on utilitarian principles (actions that could positively impact the majority).
Advisors who gave non-utilitarian advice (e.g. adhering to moral rules rather than maximising outcomes) were trusted more, especially in dilemmas involving direct harm. This suggests that people value advisors — human or AI — who align with principles that prioritise individuals over abstract outcomes.
Even when participants agreed with the AMA’s decision, they still anticipated disagreeing with AI in the future, indicating inherent scepticism.
This is perhaps because trusting AI in such matters is not simply about the level of accuracy or the degree of consistency; it is also about alignment with human values and expectations.
The research appears in the journal Cognition, titled “People expect artificial moral advisors to be more utilitarian and distrust utilitarian moral advisors.”