Nearly half of UK adults happy to use ChatGPT as a counsellor, study finds
More than 4 in 10 adults in the UK are happy to use ChatGPT for their mental health support, new research suggests.
The study, led by Bournemouth University, surveyed nearly 31,000 adults in 35 countries about their use of artificial intelligence (AI) large language models such as ChatGPT. The research also found that:
- One quarter of UK adults would be happy to delegate the role of teaching their children to AI.
- Globally, 45% of people would trust AI models to take on the role of their doctor.
- Three quarters of people surveyed said they would use an AI chat tool as a companion and a friend.
The study has been published in the journal AI & Society.
Dr Ala Yankouskaya, Senior Lecturer in Psychology at Bournemouth University, who led the study, said: “With the rapid development and mass availability of AI, more people are placing their trust in it. We wanted to learn more about how people would trust generative AI tools, such as ChatGPT, to carry out some of the most important roles in their daily lives.”
AI for mental health support
41% of participants from the UK, and 61% globally, said that they would be happy to use AI for counselling services. The researchers suggest that for the UK, this could be the result of the waiting times many people face to access the mental health services that they need.
“If someone is experiencing depression, they do not want to wait months for an appointment, so instead they can turn to AI,” Dr Yankouskaya said. “However, when I tested some of the tools myself, I found the language used very vague and confusing because the developers are careful not to jump into providing diagnoses. So, it is no substitute for speaking to a health professional.”
The researchers also noted that users were already familiar with NHS chatbots, which use similar AI technology, and this could be normalising their use of AI in other apps such as ChatGPT for their mental health care.
AI as a teacher
A quarter of people in the UK and half of everyone surveyed globally said that they would trust AI to carry out the role of a teacher, which the research team found particularly concerning.
“It really knocked me down when I saw how many people would be willing to delegate the role of teaching their children to AI,” Dr Yankouskaya explained. “We still do not know the long-term effects that using these tools for education could have on children’s memory and cognitive functions. We could be heading to the stage where we are developing children who are good at putting prompts into AI tools but not as good at taking the information in,” she continued.
The researchers were also concerned about the long-term physical effects on the brain if learning information in the traditional way were replaced by excessive search-engine use, and whether this could shrink the hippocampus, the region of the brain used for spatial awareness and learning.
AI as a doctor
45% of all respondents, and 25% in the UK, said that they would trust AI to carry out the role of their doctor. The numbers were notably higher in countries where healthcare is more expensive and harder to access.
This was less surprising to the researchers, who believe that people living in parts of the world where access to healthcare services is not readily available might rely on technology for quick answers.
However, they were cautious about the underlying algorithms used to retain the user’s attention and keep the chat relaxed. This could be particularly harmful in the context of mental health advice, where traditional practice would be to direct the user to specific services such as the Samaritans.
AI as a companion
The highest level of trust participants were willing to place in AI came in the role of friendship. Over three quarters of people globally, and over half in the UK, said they would talk to ChatGPT as a companion.
The researchers think this is explained by a perceived sense of empathy from generative language tools, because they are designed to adapt the tone of their responses to suit the user’s.
“AI tools come across as a friend who knows you well and understands you,” Dr Yankouskaya explained. “ChatGPT can remember every chat it has had with a user and it feels like a private conversation between them. Nowadays people can be very sensitive to being judged and AI tools are designed to be non-judgemental. This means they can provide the sense of security people need,” she continued.
Dr Yankouskaya and the team concluded that, as AI playing a bigger role in people’s lives moves from theoretical prospect to reality, societies need greater awareness of how generative AI tools work and of their limitations. The lack of knowledge about the long-term effects on memory means particular caution is needed before such tools take over roles in education.
Journal
AI & Society
Method of Research
Survey
Subject of Research
People
Article Title
Who lets AI take over? Cross-national variation in willingness to delegate socially important roles to artificial intelligence
AI disclosure labels may do more harm than good
According to research in JCOM, transparency labels on AI-generated posts may unintentionally amplify misinformation
Image: A new study in JCOM finds AI disclosure labels reduce trust in true scientific information while boosting false claims, a “truth–falsity crossover effect” that challenges current transparency policies. (Credit: Federica Sgorbissa - SISSA Medialab)
The growing use of AI-generated scientific and science-related content, especially on social media, raises important concerns: these texts may contain false or highly persuasive information that is difficult for users to detect, potentially shaping public opinion and decision-making.
Several jurisdictions and platforms are moving toward clearer disclosure of AI-generated or AI-synthesised content to protect the public. However, a new study published in JCOM warns that these labels may have the opposite effect of what regulators intend, decreasing the credibility of true scientific information while increasing that of false claims.
The Risks of AI-Generated Scientific Content
AI-generated content can be misleading for at least two reasons. First, language models may “hallucinate,” producing plausible but factually incorrect statements. Second, users can deliberately prompt AI systems to create false yet credible messages. For this reason, several countries have introduced transparency obligations requiring online content generated or synthesized by AI to be clearly labeled.
In their new study, Teng Lin, a PhD candidate at the School of Journalism and Communication, University of Chinese Academy of Social Sciences (UCASS), Beijing, and Yiqing Zhang, a Master’s student at the same school, set out to test whether these disclosure labels actually achieve their stated goal of protecting the public from misinformation.
The Experimental Study
“We focused on science-related information shared on social media,” explains Teng.
The experimental study involved 433 participants recruited online through the Credamo platform between March and May 2024. The researchers created four types of social media posts: correct information with or without an AI label, and misinformation with or without an AI label. The texts were adapted using GPT-4 from items published by China’s Science Rumour Debunking Platform, creating both accurate and misleading Weibo-style versions, and were then independently checked by the researchers. Participants were asked to rate the perceived credibility of each post on a scale from 1 to 5. The researchers also measured participants’ negative attitudes toward AI and their level of involvement with the topic.
A Paradoxical Effect
The results revealed a counterintuitive pattern. “Our most important finding is what we call a ‘truth-falsity crossover effect,’” says Teng. “The same AI label pushes credibility in opposite directions depending on whether the information is true or false: it reduces the credibility of true messages and increases the credibility of false ones.” He adds that this does not necessarily mean the effect would be identical across all platforms or formats, but in their experimental setting the pattern was clear.
In this context, AI disclosure does not help people distinguish between true and false information. Instead, it appears to redistribute credibility in a paradoxical way.
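For readers who want to see what such a crossover looks like in numbers, below is a minimal sketch in Python. The mean ratings are invented for illustration and are not the study’s data; the point is only that when an AI label lowers credibility for true posts and raises it for false ones, the statistical signature is an interaction between truth and labelling rather than a simple main effect of the label.

```python
# Hypothetical illustration of a "truth-falsity crossover effect" in a
# 2x2 design (truth of post x presence of an AI-disclosure label).
# The numbers below are invented for illustration; they are NOT the
# study's data. Credibility is rated on a 1-5 scale, as in the paper.

# mean credibility ratings: {(is_true, has_ai_label): mean_rating}
mean_credibility = {
    (True,  False): 3.8,   # true post, no label
    (True,  True):  3.3,   # true post, AI label  -> label hurts truth
    (False, False): 2.4,   # false post, no label
    (False, True):  2.9,   # false post, AI label -> label helps falsehood
}

# Effect of the label within each truth condition
label_effect_true = mean_credibility[(True, True)] - mean_credibility[(True, False)]
label_effect_false = mean_credibility[(False, True)] - mean_credibility[(False, False)]

print(f"label effect on true posts:  {label_effect_true:+.1f}")   # negative
print(f"label effect on false posts: {label_effect_false:+.1f}")  # positive

# A crossover means the two effects have opposite signs: what matters
# is the truth x label interaction, not the label's average effect.
assert label_effect_true < 0 < label_effect_false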
Teng and Zhang also found that individual attitudes toward AI play a role. Participants who held more negative views of AI penalized correct information even more strongly when it was labeled as AI-generated. Even among these participants, however, the credibility boost for misinformation did not disappear: it was only partially reduced, and the attenuation was topic-dependent, weakening for one topic without being eliminated overall.
This suggests that so-called “algorithm aversion” does not lead to a uniform rejection of AI-generated content, but rather to a more complex and asymmetric reaction.
The Need for Careful Policy Design
Research like this highlights the need for careful testing before implementing regulatory interventions, as well-intended transparency measures may produce unintended consequences.
“In our paper we put forward some recommendations, although they need further research to be validated,” Teng explains. “One proposal is to implement a dual-labeling approach. Instead of simply stating that the content is AI-generated, the label could also include a disclaimer indicating that the information has not been independently verified, or add a risk warning.” In short, simply informing audiences that a text was generated by AI may not be sufficient on its own.
“Another recommendation is to adopt a graded or categorical labeling system,” Teng adds. “Different types of scientific information carry different levels of risk. For example, medical or health-related information may require a stronger warning, while information about new technologies may involve lower risk. So we suggest using different levels of disclosure depending on the type and risk level of the content.”
The paper “Visible Sources and Invisible Risks: Exploring the Impact of AI Disclosure on Perceived Credibility of AI-Generated Content” by Teng Lin and Yiqing Zhang is published in the Journal of Science Communication (JCOM).
Journal
Journal of Science Communication
Method of Research
Experimental study
Subject of Research
People
Article Title
Visible Sources and Invisible Risks: Exploring the Impact of AI Disclosure on Perceived Credibility of AI-Generated Content
Article Publication Date
9-Mar-2026
Governing with AI: a new AI implementation blueprint for policymakers
New global policy brief by the University of Ottawa’s AI + Society Initiative and IVADO offers policymakers actionable insights to successfully integrate AI into government functions
Image credit: Faculty of Law, University of Ottawa
Today, around 70% of countries report using artificial intelligence (AI) to improve internal governmental processes, while a third use it to support policy design and implementation. Others are even exploring the possibility of using AI as a substitute for core governmental functions. Yet caution and pragmatism are needed to ensure successful AI implementation, as statistics show that over 80% of AI projects fail.
To support governments facing these challenges, an international group of experts led by Prof. Catherine Régis (IVADO, Université de Montréal) and Prof. Florian Martin-Bariteau (University of Ottawa) analyzed key factors behind AI implementation successes and failures in the public sector. They propose policy guidance for building a transformative and resilient public administration in the age of AI, and for guarding against the potentially negative effects and risks this technology brings.
Building a resilient public administration in the age of AI
Canada is no stranger to the AI race: Mark Carney’s government recently used an AI platform to translate and summarize the 11,000 submissions collected during its public consultation on updating its AI strategy, and it has proposed an ambitious deployment of AI in the federal public service.
“Governments deciding to use AI should go slow and steady, while being ambitious from the start. This should not be seen as indecision, but rather as a mark of seriousness and responsibility,” says Dr. Catherine Régis, director of social innovation and international policy at IVADO and professor of law at Université de Montréal.
Outcomes for integrating AI depend less on the technology’s sophistication than on institutional capacity, accountability mechanisms, vendor power relations, and resilience planning.
The policy brief’s authors recommend four courses of action to tackle this implementation:
- Redesign public services around real problems before deploying AI and involve public servants as co-designers to build on proven successes, scaling up what works.
- Invest in institutional capacity through training and cross-functional teams.
- Rebalance power with the private sector through collective procurement and collaboration to create and share AI tools that meet their requirements.
- Anchor public-sector AI by building a public trust stack around transparency, accountability, oversight and resilience.
“Bottom-up, problem-driven planning is the only credible way to transform an administration with AI,” says Dr. Florian Martin-Bariteau, director of the AI + Society Initiative and associate professor of law at the University of Ottawa. “Without planning, transparency, accountability and oversight, AI in the public sector will only amplify current dysfunctions and feed distrust from public servants and populations.”
A global policy initiative
These recommendations have been developed as part of the Global Policy Briefs on AI initiative, a joint endeavour of IVADO, Canada’s leading AI research and knowledge mobilization consortium, and the AI + Society Initiative at the University of Ottawa, aiming to provide policymakers with rigorous, actionable public policy recommendations to address major global challenges related to AI. This is the second outcome of the initiative, following last year’s brief focused on developing a roadmap for protecting democracies in the age of AI.
The global policy brief, “Governing with AI: Four Actions to Build a Transformative and Resilient Public Administration in the Age of AI,” was developed in December 2025 during a week-long policy retreat of AI experts representing North America, South America, Africa, Europe and Asia.
The project was supported by CEIMIA, the Canada-CIFAR Chair in AI and Human Rights at Mila, and the University of Ottawa Research Chair in Technology and Society. The week-long retreat was organized with the help of the Délégation du Québec à Rome and the Società Italiana per l’Organizzazione Internazionale.
About IVADO
IVADO is an interdisciplinary, cross-sectoral research and knowledge mobilization consortium whose mission is to develop and promote a robust, reasoning and responsible AI. Led by Université de Montréal with four university partners (Polytechnique Montréal, HEC Montréal, Université Laval and McGill University), IVADO brings together research centres, government bodies and industry members to co-build ambitious cross-sectoral initiatives with the goal of fostering a paradigm shift for AI and its adoption.
About the AI + Society Initiative
The AI + Society Initiative defines problems and identifies solutions to essential issues related to AI to support a better understanding and framing of the ethical, legal and societal implications of AI by leveraging a transdisciplinary approach. The Initiative promotes an inclusive research agenda with a specific focus on avoiding the amplification of global digital injustices through AI for affected communities. Led by the University of Ottawa Research Chair in Technology and Society, the Initiative is incubated at the University of Ottawa Centre for Law, Technology and Society, Canada’s premier research hub on technology law, ethics and policy.
Method of Research
Data/statistical analysis
Subject of Research
People
Article Title
Governing with AI: Four Actions to Build a Transformative and Resilient Public Administration
Article Publication Date
9-Mar-2026
Can people distinguish between AI-generated and human speech?
While listeners struggle to distinguish AI-generated from human speech, their brains rapidly adapt to subtle differences between the two types of sound after short training.
In a collaboration between Tianjin University and the Chinese University of Hong Kong, researchers led by Xiangbin Teng used behavioral and brain activity measures to explore whether people can discern between AI-generated and human speech. The researchers also assessed whether brief training improves this ability. This work is published in eNeuro.
Thirty participants listened to sentences spoken by people or AI-generated voices and judged whether the speakers were human or AI before and after short training. The researchers discovered that study participants performed poorly at discriminating between the two types of speakers, and that training helped only minimally. However, on a neural level, training made the brain’s responses more distinct for human versus AI speech. What might that mean? Says Teng, “The auditory brain system seems to start picking up subtle acoustic differences, even if people can’t reliably turn that into a behavioral decision yet. That’s encouraging—it suggests training can help, and it’s a promising starting point for building better ways to distinguish deepfake speech from real human speech. Humans are still adapting to AI-generated content, so poor performance doesn’t mean the signals aren’t there—it may mean we’re not yet using the right cues.”
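As an aside for readers unfamiliar with how discrimination ability is typically quantified in listening studies like this, below is a minimal sketch of a signal-detection sensitivity (d′) calculation in Python. The trial counts are invented, and the paper may use different measures, so treat this purely as an illustration of the general approach, not the study’s analysis.

```python
# Minimal signal-detection sketch: quantifying how well a listener
# discriminates AI-generated from human speech. The counts below are
# invented for illustration and are not taken from the study.
from statistics import NormalDist

def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    # A small correction keeps rates away from exactly 0 and 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical listener: 60 trials with AI speech, 60 with human speech.
before = d_prime(hits=33, misses=27, false_alarms=29, correct_rejections=31)
after = d_prime(hits=36, misses=24, false_alarms=27, correct_rejections=33)
print(f"d' before training: {before:.2f}")  # near 0 = near-chance performance
print(f"d' after training:  {after:.2f}")   # only slightly better
```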
###
Please contact media@sfn.org for the full-text PDF.
About eNeuro
eNeuro is an online, open-access journal published by the Society for Neuroscience. Established in 2014, eNeuro publishes a wide variety of content, including research articles, short reports, reviews, commentaries and opinions.
About The Society for Neuroscience
The Society for Neuroscience is the world's largest organization of scientists and physicians devoted to understanding the brain and nervous system. The nonprofit organization, founded in 1969, now has nearly 35,000 members in more than 95 countries.
Journal
eNeuro
Subject of Research
People
Article Title
Short-Term Perceptual Training Modulates Neural Responses to Deepfake Speech but Does Not Improve Behavioral Discrimination
Article Publication Date
9-Mar-2026
Can artificial intelligence help reduce the carbon footprint of weather forecasting models?
Wiley
Weather prediction has rapidly changed in recent years with the emergence of forecasting systems that leverage artificial intelligence. Such AI models display an impressive computational speed-up of weather forecasts compared with traditional models. New research published in Weather assessed the energy consumption, and therefore the carbon footprint, of such weather forecasting models.
Investigators found that training AI models consumes considerable energy, but this cost is offset by the models’ rapid forecasting ability compared with traditional models. Considered over one year of usage, AI data-driven models are estimated to consume at least 21 times less energy than traditional models. The findings suggest opportunities to significantly reduce the carbon footprint of weather forecasting.
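As a back-of-the-envelope illustration of how such a comparison works, the sketch below (in Python) amortizes a one-off training cost over a year of forecast runs. All numbers are invented placeholders, chosen only to land in the same ballpark as the study’s “at least 21 times” estimate; see the paper for the actual figures.

```python
# Back-of-the-envelope energy comparison: AI vs. traditional numerical
# weather prediction (NWP). All numbers are invented placeholders for
# illustration; see the paper in Weather for the actual estimates.

TRAINING_MWH = 60.0          # one-off cost to train the AI model (hypothetical)
AI_FORECAST_MWH = 0.005      # energy per AI forecast run (hypothetical)
NWP_FORECAST_MWH = 1.0       # energy per traditional forecast run (hypothetical)

FORECASTS_PER_DAY = 4        # e.g. four forecast cycles per day
DAYS = 365

runs = FORECASTS_PER_DAY * DAYS

# Over one year, the AI model pays its training cost once, after which
# each forecast is cheap; the traditional model pays full price per run.
ai_total = TRAINING_MWH + AI_FORECAST_MWH * runs
nwp_total = NWP_FORECAST_MWH * runs

print(f"AI total over one year:  {ai_total:,.1f} MWh")
print(f"NWP total over one year: {nwp_total:,.1f} MWh")
print(f"ratio (NWP / AI):        {nwp_total / ai_total:.1f}x")
```

The design point is that the fixed training cost matters less the more forecasts are run against it, which is why the comparison is framed over a year of usage rather than per run.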
“This study provides simple orders of magnitude for the energy consumption of AI in meteorology,” said corresponding author Thomas Rieutord, PhD, who conducted this work while at Met Éireann, in Ireland, and is currently at the Centre National de Recherches Météorologiques, in France. “We hope there will be future studies on the topic to provide more accurate estimates, so that developers of future weather models will have energy-consumption reduction as a target, alongside model performance.”
URL upon publication: https://onlinelibrary.wiley.com/doi/10.1002/wea.70035
Additional Information
NOTE: The information contained in this release is protected by copyright. Please include journal attribution in all coverage. For more information or to obtain a PDF of any study, please contact: Sara Henning-Stout, newsroom@wiley.com.
About the Journal
Weather publishes articles written for a broad audience, including those having a professional and a general interest in the weather, as well as those working in related fields such as climate science, oceanography, hydrometeorology and other related atmospheric and environmental sciences.
About Wiley
Wiley is a global leader in authoritative content and research intelligence for the advancement of scientific discovery, innovation, and learning. With more than 200 years at the center of the scholarly ecosystem, Wiley combines trusted publishing heritage with AI-powered platforms to transform how knowledge is discovered, accessed, and applied. From individual researchers and students to Fortune 500 R&D teams, Wiley enables the transformation of scientific breakthroughs into real-world impact. From knowledge to impact—Wiley is redefining what's possible in science and learning. Visit us at Wiley.com and Investors.Wiley.com. Follow us on Facebook, X, LinkedIn and Instagram.
Journal
Weather
Article Title
Energy and carbon footprint considerations for data-driven weather forecasting models
Article Publication Date
11-Mar-2026