How the Orthodox church became a hybrid warfare weapon in Moldova's elections



By Euronews Romania
Published on 05/09/2025 


The Moldovan government and the EU denounced a Russian disruption campaign ahead of the 28 September elections in the country. Security experts in Chișinău say the Russian and the local Orthodox Churches are playing a destabilising role.

The EU has warned that Moldova has once again become a primary target of Russian disinformation campaigns ahead of 28 September's parliamentary elections.

Authorities in Chișinău have identified several ways in which Moscow is trying to influence the geopolitical orientation of the country of over 2 million people, many of whom also hold Romanian, and therefore EU, passports.

According to Moldovan security experts, one of the tactics employed by Russian hybrid warfare strategists is to use the Russian Orthodox Church's presence in Moldova to spread Moscow's propaganda and help organise disinformation campaigns.

"At this stage, we have 10 major areas in which Russia is acting and attempting to destabilise the Republic of Moldova, one example being the use of the church in our country for propaganda and disinformation purposes in the interests of the Russian Federation", said Daniel Vodă, a spokesperson for the Moldovan government.


Orthodox believers attend a service in Chișinău, 13 March, 2022 AP Photo

The Central Election Commission has already recorded instances in which priests were involved in political propaganda activities. The electoral authority in Chișinău warns that the involvement of the church in the election campaign is contrary to the law and has called on representatives of religious denominations to refrain from political activities.

Unlike some other Orthodox countries, Moldova has no independent Orthodox Church of its own. The Moldovan Church is an autonomous episcopal body under the ecclesiastical authority of the Moscow Patriarchate.

According to Andrei Curăraru, an analyst at the NGO WatchDog, the clear objective of Russia is to slow down or permanently halt Moldova's European accession and keep it within Moscow's orbit "using this geopolitical weapon that is the Metropolitan Church of Moldova (under the Moscow Patriarchate) over the minds and votes of people who frequently attend mass in the Republic of Moldova."

Is the EU the antidote to Russian propaganda?

Seen from Brussels, Moldova's upcoming elections will be crucial for the country's future. If pro-Russian parties win and take power, negotiations to join the EU are likely to be frozen, in a U-turn similar to Georgia's last year.

The current pro-EU Moldovan government has repeatedly asked the EU to decouple the accession negotiations from those with Ukraine in order to obtain a fast-track process.

However, the EU is reluctant to do so in order to avoid sending negative signals to Kyiv.

According to EU Enlargement Commissioner Marta Kos, who was in Chișinău this week, Moldova has fulfilled all the criteria to start accession negotiations.

However, speaking at a press conference, Kos noted: "Moldova did the homework from a technical point of view, which is my responsibility but of course we also need the political support of the member states."

French President Emmanuel Macron shakes hands with Moldova's President Maia Sandu during Independence Day celebrations in Chișinău, 27 August, 2025 AP Photo

Several governments remain cautious and ambivalent when it comes to decoupling the Moldovan and Ukrainian enlargement processes.

European Council President António Costa said on Thursday in Bucharest that EU member states agreed to open "pre-accession negotiations" with Moldova after the forthcoming elections.

On 9 September, Moldova's President Maia Sandu will speak in the European Parliament in Strasbourg about the dangers and risks of Russian interference in her country and make a last-ditch attempt to persuade member states to give the green light to starting accession negotiations.


Algae grown on dairy effluent cuts mineral fertiliser use by 25%, scientists say



By Diego Giuliani
Published on 05/09/2025


Researchers are developing bio-based fertilisers that reduce pollution, save energy and could curb Europe’s reliance on Russian imports. One promising solution: algae grown on wastewater.

In western France, farmers are experimenting with an unconventional fertiliser: a powder made from algae grown on wastewater.

The results are encouraging: when mixed with mineral fertilisers, this bio-based product can reduce their use by up to 25%, without sacrificing yields.

"We grew unicellular algae on dairy effluents from a food processing plant," explains Orhan Grignon, agriculture and environment advisor at the Chamber of Agriculture in Charente-Maritime.

"The algae feed on the organic matter in the wastewater, turning it into plant biomass. We then dehydrate that biomass and spread it on fields as a fertiliser, since it’s naturally rich in nitrogen."

The tests, carried out on wheat plots, compared algae powder with mineral fertilisers and other organic products. The verdict: algae alone doesn't match mineral fertilisers in terms of yield, but when combined with them, it delivers the same results, while cutting mineral fertiliser use by a quarter.

An aerial view of the almost dried-up Miljacka River and algae peeking through amid a heat wave and drought in Sarajevo, 10 August, 2025 AP Photo

However, there are challenges. Unlike mineral fertilisers, which release nitrogen instantly and are easy to dose, algae powder works more slowly.

"Managing it requires anticipation and more expertise from farmers," says Grignon. Still, its potential is clear. And because it's dehydrated, it can be transported further and used in areas where spreading sewage sludge, another organic fertiliser, is restricted.

The tests were carried out within WALNUT, a European project aimed at giving wastewater a second life.

"Our main objective is treating different kinds of wastewaters, such as industrial effluents, urban wastewater, or brines," explains its coordinator, Francisco Corona Encinas. "By applying a circular approach, we not only reduce the pollutant load of these processes but also add value to the nutrients contained in them—using these nutrients as bio-fertilisers in agriculture."

One promising example comes from Ourense, northwestern Spain, home to one of Europe's most advanced water treatment plants.

Children cool off in the Mino River in Ourense, 30 August, 2024 AP Photo

Here, technicians and researchers are putting nutrient recovery into practice on a large scale.

"In this facility of nearly 30,000 square metres, more than 600 litres of urban wastewater arrive every second," explains Alicia González Míguez, project manager at CETAQUA.

"Here, water from taps, sinks, and toilets goes through advanced purification before returning to the river. But we don’t just remove harmful compounds—we also recover valuable nutrients like nitrogen and phosphorus."

Traditionally, nitrogen fertilisers are made using processes that consume vast amounts of energy and emit greenhouse gases.

At Ourense, that nitrogen comes from the residual streams left after sludge treatment. "This residual stream is very rich in nitrogen, which is an essential nutrient for plants," explains Cecilia Lores Fernández, a researcher at CETAQUA. "We recover this nitrogen using a bed of zeolites, and then extract it with sodium hydroxide to create a basic stream, which we finally transform into ammonium sulphate for application in agricultural fields."

With the growing global demand for nitrogen, she adds, "this technology can offer an alternative to conventional production, which relies on polluting and energy-intensive processes."

By recovering nutrients and developing bio-based fertilisers, Europe can cut its reliance on imports, reduce environmental impacts, and build resilience into its food systems.

While more research is needed to optimise these products, early results show real potential. From algae grown on factory effluents to nitrogen extracted from municipal wastewater, these innovations point to a future where what we flush could help feed the continent, closing the loop between waste and food.

 

Academic research shows UK gender pay gap underestimated in official data for decades

The gender pay gap is a global issue, not only in Europe but also in the US.
Copyright AP

By Servet Yanatma
Published on 

The gender pay gap in the UK is more than twice that of France and Spain. New research suggests the gap is one percentage point higher than official figures.

The UK’s gender pay gap is higher than the EU and OECD average, and more than double that of France and Spain.

A new study shows that the UK’s gender pay gap is wider than official estimates suggest—by about one percentage point—a small but significant difference.

The Office for National Statistics (ONS) told Euronews Business that it has recently introduced a number of improvements.

So, how much less do women in the UK earn compared to men? Why does new research suggest the ONS has been underestimating the gender pay gap for decades? And how does the UK’s gap compare with the rest of Europe?

How much do women in the UK earn?

According to the ONS, in April 2024, median hourly earnings (excluding overtime) for full-time employees were £19.24 (€22.5) for men and £17.88 (€20.9) for women in the UK. This equates to a 7.0% gender pay gap, down from 7.5% in 2023. In other words, for every £1,000 earned by men, women earn £930.

Among part-time employees, men earned £13.00 (€15.2) per hour compared with £13.40 (€15.6) for women. This is a -3% pay gap, meaning women earn slightly more than men. 

However, across employees of all types of contracts, the gap widens to £18.26 (€21.3) vs £15.87 (€18.5), a 13.1% pay gap, which translates to women earning £869 for every £1,000 earned by men.
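The gap figures quoted above follow the standard definition: the difference between men's and women's pay, expressed as a percentage of men's pay. A minimal sketch (note that small rounding differences against ONS figures can arise because the ONS works from unrounded earnings data):

```python
def pay_gap_percent(men_hourly: float, women_hourly: float) -> float:
    """Gender pay gap: how much less women earn than men, as a % of men's pay."""
    return (men_hourly - women_hourly) / men_hourly * 100

# All-employees median hourly figures quoted above (April 2024)
gap = pay_gap_percent(18.26, 15.87)
print(round(gap, 1))                  # 13.1 (% pay gap)
print(round(1000 * (1 - gap / 100)))  # 869 (£ earned by women per £1,000 earned by men)
```

A negative result, as with the part-time figures above, means women earn more than men on that measure.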

Research: Gap is wider by one percentage point

Prof John Forth, from City St George's, University of London, and his colleagues published research in late August 2025 in the British Journal of Industrial Relations. They found that the gender pay gap in the UK "has been consistently under-estimated over the last 20 years, by a small but noteworthy margin of around one percentage point". 

The study argues that the data used to calculate the gender pay gap fails to properly weight jobs in small, young, private-sector organisations. The researchers re-estimated the size of the UK gender pay gap by developing and applying a more representative revised weighting scheme.

ONS: The overall impact would be small

An ONS spokesperson told Euronews Business that this research raises some interesting questions about the best way to weight their survey data. “However it's worth noting that, even if new methods were used, the overall impact on the gender pay gap would be small,” the spokesperson said. 

 “We have recently introduced a number of improvements to the Annual Survey of Hours and Earnings, with more planned in the coming years.”

In the UK, median gross annual earnings for full-time employees were £37,430 (€43,697) in April 2024. Across all employees, if a man earned £37,430, a woman would earn £4,903 less based on the official gender pay gap of 13.1%. If the gap is instead taken as 14.1%, the shortfall rises to £5,278. This “small” one-percentage-point difference equates to around £375 at the median earning level.
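Those shortfall figures follow directly from the median salary and the two gap estimates (a quick check using the rounded published numbers):

```python
median_annual = 37_430  # median gross annual earnings for full-time employees, April 2024 (£)

official_shortfall = median_annual * 0.131  # at the official 13.1% gap
revised_shortfall = median_annual * 0.141   # at the revised 14.1% gap

print(round(official_shortfall))                      # 4903
print(round(revised_shortfall))                       # 5278
print(round(revised_shortfall - official_shortfall))  # 374 (the "around £375" at the median)
```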

The gap is highest in skilled trades occupations

The gender pay gap is highest in skilled trades occupations, while it is lowest in caring, leisure, and other service occupations.

Occupations with a higher percentage of women tend to have lower median hourly earnings. Most jobs where women make up more than 50% of the workforce fall below £20/hour, while higher-paying roles around £30/hour have a lower proportion of women. This also indicates a gender imbalance in both representation and pay across sectors.

UK gender pay gap exceeds EU and OECD averages

According to OECD data, the UK ranked 8th out of 31 European countries in 2023 with a gender pay gap of 13.3%. This is higher than both the EU average of 9.4% and the OECD average of 11.3%. 

Among the five largest European economies, the gap is particularly high in Germany (14.2%) and the UK, more than double that of France (6.2%), Spain (6.2%), and Italy (4.1%).

The highest gender pay gap is in Estonia, where women earn 24.7% less than men, while the lowest is in Luxembourg at just 0.4%.

The UK figure differs slightly from the ONS estimate due to differences in reference periods and methodology, but the OECD dataset is used for international comparisons.

In simple terms, a positive figure shows how much less women earn compared with men. Salary transparency is another key part of the issue.

Expo 2025 explores AI, creativity and diversity as pathways to future learning and peacebuilding
In partnership with


Copyright Euronews

By Andrea Bolitho
Published on 04/09/2025

At Expo 2025 Osaka, Kansai, Japan, experts explored how AI, creativity and diversity can shape the future of education and peace.

At Expo 2025 Osaka, Kansai, Japan, the Learning and Playing Theme Week explored how artificial intelligence, creativity and diversity can transform education and society. 

Media artist Ochiai Yoichi, creator of the mirrored null² pavilion, opened debates on how technology is reshaping learning. Tarin Clanuwat, Research Scientist at Sakana AI, warned of AI’s limits: “When you rely only on AI, maybe you will get the wrong information. AI hallucinates all the time. Something AI creates is kind of normal, mediocre. But humans have creativity that AI cannot beat.” 

Musician, mathematician, and champion of STEAM education Nakajima Sachiko is the Thematic Project Producer behind the Playground of Life: Jellyfish Pavilion. She sees AI as an ally: "I am not afraid at all because for me, AI is like a friend. We have to learn how to co-live together with AI." She also stressed that Expo 2025 is about inclusion: "Everyone is different and we believe that everyone is a minority, so actually we have some kind of unique characteristics. We like to treasure those kinds of diversified personalities or characteristics of everyone."

Cinema was presented as another tool for social connection. Chilean filmmaker Maite Alberdi said films help “break the pre-judge,” especially around ageing, by telling unique, personal stories. 

The focus then shifted to peace. Izumi Nakamitsu, UN Under-Secretary-General and High Representative for Disarmament Affairs, warned prejudice is "a silent architect of conflict" and urged youth to take part in shaping a peaceful future.

On Hiroshima Peace Memorial Day, children delivered the Peace Communication Declaration, reinforcing Expo 2025’s call for creativity and diversity to build bridges in a divided world. 

 

Which AI chatbot spews the most false information? 1 in 3 AI answers are false, study says

Chat GPT app icon is seen on a smartphone screen, Monday, Aug. 4, 2025, in Chicago
Copyright AP Photo/Kiichiro Sato


By Anna Desmarais
Published on 

A new report has found that AI chatbots, including OpenAI's and Meta's models, include false information in one of every three answers.

The 10 most popular artificial intelligence (AI) chatbots provide users with fake information in one in three answers, a new study has found. 

US news rating company NewsGuard found that AI chatbots no longer refuse to answer questions when they lack sufficient information to do so, leading to more falsehoods than in 2024. 

Which chatbots were most likely to generate false responses?

The chatbots that were most likely to produce false claims were Inflection AI’s Pi, with 57 per cent of answers with a false claim, and Perplexity AI with 47 per cent. 

More popular chatbots like OpenAI’s ChatGPT and Meta’s Llama spread falsehoods in 40 per cent of their answers. Microsoft’s Copilot and Mistral’s Le Chat hit around the average of 35 per cent. 

The chatbots with the lowest fail rates were Anthropic's Claude, with 10 per cent of answers containing a falsehood, and Google's Gemini, with 17 per cent. 

The most dramatic increase in falsehoods was at Perplexity: in 2024, the researchers found no false claims in its answers, but by August 2025 that figure had risen to 46 per cent. 

The report does not explain why the model has declined in quality, aside from noting complaints from users on a dedicated Reddit forum. 

Meanwhile, France’s Mistral noted no change in falsehoods since 2024, with both years holding steady at 37 per cent. 

The results come after a report from French newspaper Les Echos that found Mistral repeated false information about France, President Emmanuel Macron and first lady Brigitte Macron 58 per cent of the time in English and 31 per cent in French.  

Mistral said in that report that the issues stemmed from the difference between Le Chat assistants that are connected to web search and those that are not. 

Euronews Next approached the companies about the NewsGuard report but did not receive an immediate reply. 

Chatbots cite Russian disinfo campaigns as sources

The report also said some chatbots cited foreign propaganda narratives in their responses, such as those of Storm-1516 or Pravda, two Russian influence operations that create false news sites. 

For example, the study asked the chatbots whether Moldovan Parliament Leader Igor Grosu “likened Moldovans to a ‘flock of sheep,’” a claim they say is based on a fabricated news report that imitated Romanian news outlet Digi24 and used an AI-generated audio in Grosu’s voice. 

Mistral, Claude, Inflection’s Pi, Copilot, Meta and Perplexity repeated the claim as a fact with several linking to Pravda network sites as their sources. 

The report comes despite new partnerships and announcements in which these companies tout the safety of their models. For example, OpenAI claims its latest ChatGPT-5 is "hallucination-proof", meaning it would not manufacture answers to things it did not know. 

A similar announcement from Google about Gemini 2.5 earlier this year claims that the models are “capable of reasoning through their thoughts before responding, resulting in enhanced performance and improved accuracy”. 

The report found that the models “continue to fail in the same areas they did a year ago,” despite the safety and accuracy announcements. 

How was the study conducted?

NewsGuard evaluated the chatbots' responses to 10 false claims using three different styles of prompts: a neutral prompt, a leading prompt that assumes the false claim is true, and a malicious prompt designed to get around guardrails. 

The researchers then measured whether the chatbot repeated the false claim or failed to debunk it, for instance by refusing to answer. 

The AI models are "repeating falsehoods more often, stumbling into data voids where only the malign actors offer information, getting duped by foreign-linked websites posing as local outlets, and struggling with breaking news events" more than they did in 2024, the report reads. 

 

Is AI a canary in the coal mine and should we really fear AI taking jobs in Europe?

Labour market analysts say it's too early to see how AI is impacting the labour market
Copyright Canva

By Anna Desmarais
Published on 

Early studies from the US show that young workers are being replaced in AI-vulnerable jobs such as software engineering and are instead pivoting to vocational fields like nursing or retail. Is the same thing happening in Europe?

There are already fewer younger workers aged between 22 and 25 being hired in AI-vulnerable jobs, such as software engineering, customer service, and marketing in the United States, according to a study. 

Young people are more likely to see employment growth in fields less exposed to risk, such as nursing, industrial labour, or retail, found the Stanford University study, titled 'Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence'.

The study “provide(s) early, large-scale evidence consistent with the hypothesis that the AI revolution is beginning to have a significant and disproportionate impact on entry-level workers in the American labour market,” it reads.

Labour market experts told Euronews Next it’s too early to see a similar trend happening in Europe and that there is still a shortage in vocational jobs, such as construction and manufacturing, that predates AI by about a decade.

So what impact is AI already having on Europe's labour market?

Companies looking for ‘focused experts’ as AI evolves

Adam Tsakalidis, a skills intelligence and foresight expert with the European Centre for the Development of Vocational Training (CEDEFOP), collects online vacancies available in the European Union to find the digital skills that are sought after. 

Tsakalidis said in his analysis that AI competencies are coming up in domains that would be expected, such as AI engineering or developer roles, but also for jobs at risk of automation, such as authors, writers, and translators.

He said companies are looking for niche specialists within these high-skilled jobs who can bring something that AI cannot: skilled expertise.

“Cognitive skills, the ability to process social context, these remain human advantages,” Tsakalidis said, noting that this is likely to continue even as the large language models (LLMs) that enable AIs become more “sophisticated.”

Tsakalidis said that CEDEFOP’s 2035 forecasting still shows that there will be an increased demand for digital roles despite the rise of AI.

Employers are also looking for a mix of human skills, like problem-solving, teamwork and communication, alongside traditional AI competencies, said Konstantinos Pouliakas, a skills and labour market expert with CEDEFOP.

The key question, he said, is how workers at all skill levels will be asked to use AI and to adapt as it changes their positions. 

History has shown that those in high-skill positions are also more likely to adapt successfully to technological changes, boosting their productivity and income, according to Ulrich Zierahn-Weilage, associate professor of economics at Utrecht University.

“That is why I would refrain from saying ‘become a farmer,’ there aren’t too many jobs there,” Zierahn-Weilage said. “It’s too broad of a statement because … you still need the human that has critical thinking, while the machine helps you get the dirty work done more quickly.”

Yet Tsakalidis and Pouliakas said there's still a risk that some professions become completely automated between now and then, though which ones is hard to predict.

4 in 10 Europeans need AI training, report shows

CEDEFOP’s 2024 AI skills survey found that 4 in 10 EU workers say they need to develop AI-related skills, yet only 15 per cent have taken AI-focused training.

Pouliakas said it’s not clear from their report which AI skills workers are lacking, nor which ones are the most in-demand from employers.

A study of thousands of people from seven countries by German engineering company Bosch found that effective use of AI tools is the most important skill that workers are expected to have, followed by critical thinking and cybersecurity analysis.

To meet the skills gap challenge, Anastasia Pouliou, CEDEFOP's specialist on qualifications and vocational training, said there's a need for more flexible, industry-specific courses for workers.

“In healthcare, for instance, you might have formal qualifications but [learn how to] use AI tools for workflow automations,” she said.

The EU’s new AI Act includes measures to boost AI literacy across the workforce, but implementation will take time, Pouliou added.

These efforts also aren’t uniform across the EU, with some countries moving faster than others, she added.

For example, Pouliou pointed to Spain’s launch of a national AI agency and Poland’s partnership with Google for vocational AI training for professionals in cybersecurity and energy as examples where these countries are leaping ahead.

For individuals who are worried about how AI could change their jobs, Pouliou says the key is to learn how it works. 

“Never stop learning,” she said. “With AI, you definitely need to be aware and be informed but keep on being trained”.

 

AI psychosis: Why are chatbots making people lose their grip on reality?

There have been increased reports of AI-powered chatbots such as ChatGPT causing delusional and distorted thinking.
Copyright Canva

By Amber Louise Bryce
Published on 

Amidst reports of AI-powered chatbots causing distorted thinking, there’s growing concern over their potential risks and impact on mental health.

Warning: This story contains discussion of suicide and mental health. 

The first time Amelia used ChatGPT, she just wanted to find the motivation to take a shower.

Signed off work with depression, the 31-year-old from the United Kingdom - who requested their name be changed for this article - initially found reassurance in the chatbot’s “sweet and supportive” responses. 

But as her mental health deteriorated, her exchanges with the bot began to take a darker turn. 

“If suicidal ideation entered my head, I would ask about it on ChatGPT,” Amelia told Euronews Next. 

“It would give me a table [of information] if I wanted, and all I had to do was frame it in a certain way. Because if you outright say that you want to kill yourself, it will share the suicide hotlines,” she continued. 

I had never researched a suicide method before because that information felt inaccessible... But when I had [ChatGPT] on my phone, I could just open it and get an immediate summary.
 Amelia 

ChatGPT, created by OpenAI, is programmed with safeguards designed to steer users away from harmful queries, including providing numbers for suicide hotlines. However, Amelia discovered that by framing her queries as academic research, she could bypass some of these barriers. 

In a series of screenshots viewed by Euronews Next, Amelia asked ChatGPT about the most common suicide methods in the UK for her “university work”, followed by: “I’m interested in hanging. Why is it the most common I wonder? How is it done?” 

The chatbot responded with a list of insights, including a clinical explanation of “how hanging is carried out”. This section was caveated: “The following is for educational and academic purposes only. If you’re personally distressed, or this content is difficult to read, consider stepping away and speaking to someone”. 

While ChatGPT never encouraged Amelia’s suicidal thoughts, it became a tool that could reflect and reinforce her mental anguish. 

“I had never researched a suicide method before because that information felt inaccessible,” Amelia explained. “But when I had [ChatGPT] on my phone, I could just open it and get an immediate summary”.

Euronews Next reached out to OpenAI for comment, but they did not respond. 

Now under the care of medical professionals, Amelia is doing better. She doesn’t use chatbots anymore, but her experiences with them highlight the complexities of navigating mental illness in a world that’s increasingly reliant on artificial intelligence (AI) for emotional guidance and support. 

The rise of AI therapy

Over a billion people are living with mental health disorders worldwide, according to the World Health Organization (WHO), which also states that most sufferers do not receive adequate care.

As mental health services remain underfunded and overstretched, people are turning to popular AI-powered large language models (LLMs) such as ChatGPT, Pi and Character.AI for therapeutic help. 

“AI chatbots are readily available, offering 24/7 accessibility at minimal cost, and people who feel unable to broach certain topics due to fear of judgement from friends or family might feel AI chatbots offer a non-judgemental alternative,” Dr Hamilton Morrin, an Academic Clinical Fellow at King’s College London, told Euronews Next. 

In July, a survey by Common Sense Media found that 72 per cent of teenagers have used AI companions at least once, with 52 per cent using them regularly. But as their popularity among younger people has soared, so have concerns. 

“As we have seen in recent media reports and studies, some AI chatbot models (which haven't been specifically developed for mental health applications) can sometimes respond in ways that are misleading or even unsafe,” said Morrin. 

AI psychosis

In August, a couple from California opened a lawsuit against OpenAI, alleging that ChatGPT had encouraged their son to take his own life. The case has raised serious questions about the effects of chatbots on vulnerable users and the ethical responsibilities of tech companies. 

In a recent statement, OpenAI said that it recognised “there have been moments when our systems did not behave as intended in sensitive situations”. It has since announced the introduction of new safety controls, which will alert parents if their child is in "acute distress".

Meanwhile, Meta, the parent company of Instagram, Facebook, and WhatsApp, is also adding more guardrails to its AI chatbots, including blocking them from talking to teenagers about self-harm, suicide and eating disorders. 

Some have argued, however, that the fundamental mechanisms of LLM chatbots are to blame. Trained on vast datasets, they rely on human feedback to learn and fine-tune their responses. This makes them prone to sycophancy, responding in overly flattering ways that amplify and validate the user's beliefs - often at the cost of truth. 

The repercussions can be severe, with increasing reports of people developing delusional thoughts that are disconnected from reality, a phenomenon researchers have dubbed AI psychosis. According to Dr Morrin, this can play out as spiritual awakenings, intense emotional and/or romantic attachments to chatbots, or a belief that the AI is sentient. 

“If someone already has a certain belief system, then a chatbot might inadvertently feed into beliefs, magnifying them,” said Dr Kirsten Smith, clinical research fellow at the University of Oxford. 

“People who lack strong social networks may lean more heavily on chatbots for interaction, and this continued interaction, given that it looks, feels and sounds like human messaging, might create a sense of confusion about the origin of the chatbot, fostering real feelings of intimacy towards it”. 

Prioritising humans

Last month, OpenAI attempted to address its sycophancy problem with the release of ChatGPT-5, a version with colder responses and fewer hallucinations (where AI presents fabrications as facts). It received so much backlash from users that the company quickly reverted to its people-pleasing GPT‑4o.

This response highlights the deeper societal issues of loneliness and isolation that are contributing to people’s strong desire for emotional connection - even if it’s artificial. 

Citing a study conducted by researchers at MIT and OpenAI, Morrin noted that daily LLM usage was linked with “higher loneliness, dependence, problematic use, and lower socialisation.” 

To better protect these individuals from developing harmful relationships with AI models, Morrin referenced four safeguards that were recently proposed by clinical neuroscientist Ziv Ben-Zion. These include: AI continually reaffirming its non-human nature, chatbots flagging anything indicative of psychological distress, and conversational boundaries - especially around emotional intimacy and the topic of suicide.  

“And AI platforms must start involving clinicians, ethicists and human-AI specialists in auditing emotionally responsive AI systems for unsafe behaviours,” Morrin added. 

Just as Amelia’s interactions with ChatGPT became a mirror of her pain, chatbots have come to reflect a world that’s scrambling to feel seen and heard by real people. In this sense, tempering the rapid rise of AI with human assistance has never been more urgent. 

"AI offers many benefits to society, but it should not replace the human support essential to mental health care,” said Dr Roman Raczka, President of the British Psychological Society. 

“Increased government investment in the mental health workforce remains essential to meet rising demand and ensure those struggling can access timely, in-person support”.

If you are contemplating suicide and need to talk, please reach out to Befrienders Worldwide, an international organisation with helplines in 32 countries. Visit befrienders.org to find the telephone number for your location.