Wednesday, March 11, 2026

Nearly half of UK adults happy to use ChatGPT as a counsellor, study finds


Bournemouth University




More than 4 in 10 adults in the UK are happy to use ChatGPT for their mental health support, new research suggests. 

The study, led by Bournemouth University, surveyed nearly 31,000 adults in 35 countries about their use of Artificial Intelligence (AI) large language models such as ChatGPT. The research also discovered that: 

  • One quarter of UK adults would be happy to delegate the role of teaching their children to AI. 
  • Globally, 45% of people would trust AI models to take on the role of their doctor. 
  • Three quarters of people surveyed said they would use an AI chat tool as a companion and a friend.   

The study has been published in the journal AI and Society.  

Dr Ala Yankouskaya, Senior Lecturer in Psychology at Bournemouth University, who led the study, said: “With the rapid development and mass availability of AI, more people are placing their trust in it. We wanted to learn more about how people would trust generative AI tools, such as ChatGPT, to carry out some of the most important roles in their daily lives.”  

AI for mental health support  

41% of participants from the UK, and 61% globally, said that they would be happy to use AI for counselling services. The researchers suggest that for the UK, this could be the result of the waiting times many people face to access the mental health services that they need.  

“If someone is experiencing depression, they do not want to wait months for an appointment, so instead they can turn to AI,” Dr Yankouskaya said. “However, when I tested some of the tools myself, I found the language used very vague and confusing because the developers are careful not to jump into providing diagnoses. So, it is no substitute for speaking to a health professional.” 

The researchers also noted that users were already familiar with NHS chatbots, which use similar AI technology, and that this familiarity could be normalising the use of other AI tools such as ChatGPT for mental health care. 

 

AI as a teacher 

A quarter of people in the UK and half of everyone surveyed globally said that they would trust AI to carry out the role of a teacher, which the research team found particularly concerning.  

“It really knocked me down when I saw how many people would be willing to delegate AI to the role of teaching their children,” Dr Yankouskaya explained. “We still do not know the long-term effects that using these tools for education could have on children’s memory and cognitive functions. We could be heading to the stage where we are developing children who are good at putting prompts into AI tools but not as good at taking the information in,” she continued.  

The researchers were also concerned about the long-term physical effects on the brain if learning information in traditional ways were replaced by excessive search-engine use, and whether this could shrink the hippocampus, the region of the brain used for spatial awareness and learning. 

 

AI as a doctor 

45% of all respondents and 25% in the UK said that they would trust AI to carry out the role of their doctor. The numbers were notably higher in countries where healthcare is more expensive and harder to access.  

This was less surprising to the researchers, who believe that people living in parts of the world where healthcare services are not readily available might rely on technology for quick answers.  

However, they were cautious about the underlying algorithms used to retain users’ attention and keep them in a relaxed chat. This could be more harmful in the context of mental health advice, where the traditional approach would be to direct the user to specific services such as the Samaritans. 

 

AI as a companion 

The highest level of trust participants were willing to place in AI came in the role of friendship. Over three quarters of people globally and over half of people in the UK said they would talk to ChatGPT as a companion.  

The researchers think this is explained by a perceived sense of empathy from generative language tools, because they are designed to adapt the tone of their responses to suit the user’s. 

“AI tools come across as a friend who knows you well and understands you,” Dr Yankouskaya explained. “ChatGPT can remember every chat it has had with a user and it feels like a private conversation between them. Nowadays people can be very sensitive to being judged and AI tools are designed to be non-judgemental. This means they can provide the sense of security people need,” she continued.  

Dr Yankouskaya and the team concluded that as a bigger role for AI in people’s lives moves from theoretical prospect to reality, there needs to be more awareness within societies of how generative AI tools work and their limitations. Given how little is known about the long-term effects on memory, caution needs to be applied before these tools take over roles in education in particular.  


AI disclosure labels may do more harm than good



According to research in JCOM, transparency labels on AI-generated posts may unintentionally amplify misinformation




Sissa Medialab

Image caption: AI disclosure warnings may backfire. A new study in JCOM finds they reduce trust in true scientific information while boosting false claims, a “truth-falsity crossover effect” that challenges current transparency policies. (Credit: Federica Sgorbissa, SISSA Medialab)




The growing use of AI-generated scientific and science-related content, especially on social media, raises important concerns: these texts may contain false or highly persuasive information that is difficult for users to detect, potentially shaping public opinion and decision-making.

Several jurisdictions and platforms are moving toward clearer disclosure of AI-generated or AI-synthesised content to protect the public. However, a new study published in JCOM warns that these labels may have the opposite effect of what regulators intend, decreasing the credibility of true scientific information while increasing that of false claims.

The Risks of AI-Generated Scientific Content

AI-generated content can be misleading for at least two reasons. First, language models may “hallucinate,” producing plausible but factually incorrect statements. Second, users can deliberately prompt AI systems to create false yet credible messages. For this reason, several countries have introduced transparency obligations requiring online content generated or synthesized by AI to be clearly labeled.

In their new study, Teng Lin, a PhD candidate at the School of Journalism and Communication, University of Chinese Academy of Social Sciences (UCASS), Beijing, and Yiqing Zhang, a Master’s student at the same school, set out to test whether these disclosure labels actually achieve their stated goal of protecting the public from misinformation.

The Experimental Study

“We focused on science-related information shared on social media,” explains Teng.

The experimental study involved 433 participants recruited online through the Credamo platform between March and May 2024. The researchers created four types of social media posts: correct information with or without an AI label, and misinformation with or without an AI label. The texts were adapted using GPT-4 from items published by China’s Science Rumour Debunking Platform, creating both accurate and misleading Weibo-style versions, and were then independently checked by the researchers. Participants were asked to rate the perceived credibility of each post on a scale from 1 to 5. The researchers also measured participants’ negative attitudes toward AI and their level of involvement with the topic.

A Paradoxical Effect

The results revealed a counterintuitive pattern. “Our most important finding is what we call a ‘truth-falsity crossover effect,’” says Teng. “The same AI label pushes credibility in opposite directions depending on whether the information is true or false: it reduces the credibility of true messages and increases the credibility of false ones.” He adds that this does not necessarily mean the effect would be identical across all platforms or formats, but in their experimental setting the pattern was clear.
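To make the reported pattern concrete, here is a minimal sketch of the crossover as a 2×2 interaction, written in Python. The mean ratings below are invented purely for illustration (they are not results from the paper, and this is not the authors’ analysis code); only the opposite sign pattern matters.

```python
# A minimal sketch (not the authors' analysis code) of the study's 2x2
# design: veracity (true/false) x label (AI-labelled/unlabelled). The
# mean credibility ratings below are HYPOTHETICAL, chosen only to show
# what a crossover interaction looks like on the study's 1-5 scale.
means = {
    ("true",  "unlabelled"): 3.9,
    ("true",  "labelled"):   3.4,  # label lowers credibility of true posts
    ("false", "unlabelled"): 2.1,
    ("false", "labelled"):   2.6,  # label raises credibility of false posts
}

for veracity in ("true", "false"):
    effect = means[(veracity, "labelled")] - means[(veracity, "unlabelled")]
    print(f"{veracity:>5} posts: label effect on mean rating = {effect:+.1f}")

# Opposite signs for the two label effects are the "truth-falsity
# crossover": a veracity x label interaction, not a uniform main effect.
```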

In this context, AI disclosure does not help people distinguish between true and false information. Instead, it appears to redistribute credibility in a paradoxical way.

Teng and Zhang also found that individual attitudes toward AI play a role. Participants who held more negative views of AI penalized correct information even more strongly when it was labeled as AI-generated. However, even among those with negative attitudes, the credibility boost observed for misinformation did not disappear entirely; it was only partially reduced, and this attenuation was topic-dependent, as it weakened in one topic but was not eliminated overall.

This suggests that so-called “algorithm aversion” does not lead to a uniform rejection of AI-generated content, but rather to a more complex and asymmetric reaction.

The Need for Careful Policy Design

Research like this highlights the need for careful testing before implementing regulatory interventions, as well-intended transparency measures may produce unintended consequences.

“In our paper we put forward some recommendations, although they need further research to be validated,” Teng explains. “One proposal is to implement a dual-labeling approach. Instead of simply stating that the content is AI-generated, the label could also include a disclaimer indicating that the information has not been independently verified, or add a risk warning.” In short, simply informing audiences that a text was generated by AI may not be sufficient on its own.

“Another recommendation is to adopt a graded or categorical labeling system,” Teng adds. “Different types of scientific information carry different levels of risk. For example, medical or health-related information may require a stronger warning, while information about new technologies may involve lower risk. So we suggest using different levels of disclosure depending on the type and risk level of the content.”
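As a rough illustration of that graded-labelling idea, the sketch below maps content categories to disclosure texts of increasing strength. The categories, the wording, and the `label_for` helper are hypothetical assumptions for illustration, not a scheme specified in the paper.

```python
# A hypothetical sketch of graded AI-disclosure labelling: stronger
# warnings for higher-risk content. Categories and wording are
# illustrative assumptions, not a scheme defined in the paper.
DISCLOSURE_LEVELS = {
    "health": ("AI-generated. Not independently verified. "
               "Consult a qualified professional before acting on it."),
    "technology": "AI-generated. Not independently verified.",
    "general": "AI-generated.",
}

def label_for(category: str) -> str:
    """Return the disclosure text for a category (falls back to 'general')."""
    return DISCLOSURE_LEVELS.get(category, DISCLOSURE_LEVELS["general"])

print(label_for("health"))     # strongest warning
print(label_for("astronomy"))  # unknown category -> generic label
```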

The paper “Visible Sources and Invisible Risks: Exploring the Impact of AI Disclosure on Perceived Credibility of AI-Generated Content” by Teng Lin and Yiqing Zhang is published in the Journal of Science Communication (JCOM).
