Sunday, October 19, 2025

 

Do animals fall for optical illusions? What fish and birds can teach us about perception




Frontiers
Image: The famous Ebbinghaus illusion, named for its discoverer, the German psychologist Hermann Ebbinghaus (1850–1909). Despite appearances, the two orange circles are the same size.

Credit: Wikimedia Commons, public domain https://upload.wikimedia.org/wikipedia/commons/b/bc/Mond-vergleich.svg




Have you ever looked at two circles of exactly the same size and sworn one was larger? If so, your eyes have been tricked by the Ebbinghaus illusion, a classic example of how context can shape what we see. Place a circle among other smaller circles, and it seems bigger; place it among larger ones, and it shrinks before our eyes. This illusion fascinates psychologists because it reveals that perception is not a mirror of the outside world but a clever construction of the brain.
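
For readers who want to convince themselves, here is a small Python sketch (our illustration for this post, not the stimuli used in the experiments) that draws the two classic arrangements with matplotlib. Both orange targets are drawn with exactly the same radius; only the surrounding context circles differ.

```python
# Minimal sketch of the Ebbinghaus illusion (illustrative only).
# Both orange target circles have the same radius; the context differs.
import numpy as np
import matplotlib.pyplot as plt

def ebbinghaus_group(ax, center, target_r, surround_r, n=8, gap=0.2):
    """Draw one orange target circle ringed by n blue context circles."""
    cx, cy = center
    ax.add_patch(plt.Circle((cx, cy), target_r, color="orange"))
    ring = target_r + surround_r + gap  # distance to context-circle centers
    for angle in np.linspace(0, 2 * np.pi, n, endpoint=False):
        x = cx + ring * np.cos(angle)
        y = cy + ring * np.sin(angle)
        ax.add_patch(plt.Circle((x, y), surround_r, color="steelblue"))

fig, ax = plt.subplots(figsize=(8, 4))
ebbinghaus_group(ax, (-3, 0), target_r=0.5, surround_r=1.0)   # target looks smaller
ebbinghaus_group(ax, (3, 0), target_r=0.5, surround_r=0.25)   # target looks bigger
ax.set_xlim(-5.5, 5.5)
ax.set_ylim(-2.5, 2.5)
ax.set_aspect("equal")
ax.axis("off")
plt.show()
```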

But here is the question that inspired our study: do other animals fall for the same tricks? If a tiny fish or a bird perceives the illusion, what does that tell us about the way they see and interpret their surroundings?

Illusions are more than curiosities. They are powerful tools to understand how brains assemble sensory information. When perception goes ‘wrong’, it highlights the shortcuts and strategies the brain uses to make sense of complex environments.

In humans, the Ebbinghaus illusion is linked to global processing: the tendency to interpret a scene as a whole before focusing on details. But not all animals live in the same sensory world we do. By testing illusions across species, we can ask whether shared patterns point to deep evolutionary roots, or whether differences reveal adaptations to particular ecological niches. For example, global processing may have evolved in species that need to rapidly integrate complex scenes—such as detecting predators or evaluating group size—while local processing may be favored in species that rely on precise object recognition, like picking out seeds or prey items against a cluttered background.

Fish versus birds: two worlds of vision

To explore this, we turned to two very different species: the guppy (Poecilia reticulata) and the ring dove (Streptopelia risoria).

Guppies inhabit shallow tropical streams full of flickering light, dense vegetation, and unpredictable predators. Their survival depends on rapid decisions: choosing mates, joining shoals, and escaping threats. In such a cluttered world, being able to judge relative size at a glance can be crucial.

Ring doves, by contrast, are terrestrial granivores. They spend much of their time pecking at small seeds scattered on the ground. Precision and attention to fine detail could matter more than analyzing the whole scene. Moreover, their binocular vision allows them to make accurate judgments of distance and size in a very different context.

By placing these species side by side, we asked: does the same illusion deceive both a fish darting through water and a bird searching the ground?

Circles of deception

Our experiments used food as the central ‘circle’. For guppies, flakes of food were placed within arrays of smaller or larger surrounding circles. For doves, millet seeds were presented in similar arrangements.

The results were striking. Guppies consistently fell for the illusion: when food was surrounded by smaller circles, the guppies chose it more often, as if it really were larger. Their perception closely mirrored that of humans.

Ring doves, however, told a different story. At the group level, they showed no clear susceptibility to the illusion. Some individuals behaved as humans did, others in the opposite way, and many seemed unaffected altogether. This variability suggests that doves may rely on different perceptual strategies: more local, detail-oriented, and less swayed by surrounding context.

Why does it matter?

At first glance, it might seem like just an amusing trick of vision. But these findings speak to deeper questions in evolutionary biology and comparative cognition: perception is not about accuracy for its own sake; it is about what works in a given environment. For guppies, integrating the whole scene may help them navigate visually complex streams, spot larger mates, or quickly gauge relative sizes in a shoal. For doves, tuned to picking out seeds against a messy background, focusing on absolute size and local details may be more helpful.

The study also reminds us that variation within a species can be as revealing as differences between species. The doves’ mixed responses suggest that individual experience or innate bias can strongly shape how an animal interprets illusions. Just like in humans, where some people are strongly fooled by illusions and others hardly at all, animal perception is not uniform.

A window into other minds

By comparing species as different as fish and birds, we get a glimpse of the extraordinary diversity of perceptual worlds. The Ebbinghaus illusion is only one of many tools researchers use to explore these worlds, but it highlights a key point: what we see is not always what is there.

For humans, this is a reminder of the brain’s creative shortcuts. For animals, it shows how ecological pressures sculpt perception in ways that fit each species’ lifestyle. And for science, it opens a window onto the evolutionary origins of cognition itself. Studying illusions across species helps us understand not only how animals see but also how perception evolves to meet the challenges of life on Earth.

Nearly half of World Heritage sites face climate threats, warns nature conservation group

117 out of 271 heritage areas are at high or very high risk; International Union for Conservation of Nature stresses the need for urgent and stronger climate action

Yesim Yuksel and Merve Berker | 17.10.2025 - TRT/AA



ISTANBUL / ANKARA

Nearly half of the world’s natural and cultural World Heritage sites are now at high or very high risk due to climate change, according to the International Union for Conservation of Nature (IUCN)’s recent World Heritage Outlook 4 report, which calls for stronger global climate action to protect these irreplaceable ecosystems.

The report evaluated 271 sites designated by UNESCO for their natural and cultural significance and found that 117, or about 43%, face “high” or “very high” levels of threat from climate change.

The figure represents a sharp increase compared to 2020, when 33% of sites were under such risk, and 2017, when the rate stood at 27%.

According to IUCN, climate change remains not only the greatest danger to heritage sites but also the fastest-rising one.

Between 2020 and 2025 alone, the number of sites under severe threat from climate impacts grew by 31.

Tim Badman, director of the IUCN's World Heritage and Culture Programme, told Anadolu that the report's ratings serve as a projection of each site's conservation outlook, and the increase in affected areas over the last decade suggests an urgent need for greater climate action.

He noted that changes in seasonal flooding patterns are already altering hydrology and ecosystems, marine heatwaves are leading to coral bleaching, and rising sea levels are transforming sedimentation and salinity dynamics.

Melting glaciers, he said, are changing water flows and increasing the risk of landslides, while shifting rainfall patterns are causing desertification in some regions and flooding in others.

The report found that sites hosting significant biodiversity are suffering the greatest losses.

In 2014, 71% of these areas were categorized as being in good or low-risk condition, compared to only 52% in 2025, which is the lowest level recorded so far.

The overall conservation outlook also continues to deteriorate.

In 2014, 63% of sites had a positive outlook, while the latest assessment shows a decline to 57%.

IUCN warned that this represents a major setback in global biodiversity protection.

Invasive species, diseases pose second-most serious global threat

The report said invasive species and diseases are emerging as the second most serious global threats.

The number of sites reporting high or very high threats from pathogens increased from two in 2020 to 19 in 2025, while invasive species continued to spread rapidly.

Tourism, urban development, and industrial expansion also remain key pressures.

Since 2020, the number of heritage sites facing high threats from tourism activities has risen by 4%, from residential areas by 5%, and from commercial or industrial areas by 3%.

The IUCN analysis sorted the 271 sites into four categories: “good,” “good with some concerns,” “significant concern,” and “critical.”

According to the report, Türkiye’s Pamukkale is rated as “good with some concerns,” while Goreme National Park in Cappadocia has been downgraded to “significant concern.”

Badman said this downgrade is likely linked to high visitor numbers and increasing vehicle traffic in the region.

In addition to identifying risks, the report evaluated local actions taken against climate threats.

It found that 42% of sites are already implementing effective or highly effective adaptation measures.

However, Badman said more effort is needed both locally and globally to strengthen resilience and mitigation.

Regional differences in climate change

In addition, he explained that IUCN’s latest report, covering 2014, 2017, 2020, and 2025, shows climate-related impacts becoming increasingly widespread and severe.

While climate change is the most common global threat, regional differences persist.

In Africa, the main pressures include poaching, deforestation, and mining, while in South America, tourism-related activities have overtaken livestock farming as the dominant threat.

Badman emphasized that the findings should be taken as a warning for policymakers.

He said countries must strengthen their emission reduction commitments under the Paris Agreement and set ambitious targets to keep global warming within 1.5°C above preindustrial levels.

He also pointed to UNESCO’s Climate Action Policy for World Heritage as an essential framework to guide global and site-level efforts.

The document highlights adaptation, mitigation, innovation, and research as key components of sustainable heritage conservation.

IUCN committed to supporting local conservation efforts

Despite the alarming trends, IUCN noted several success stories in local conservation.

Community-based and Indigenous-led initiatives in areas such as the Monarch Butterfly Biosphere Reserve in Mexico and Tubbataha Reefs Natural Park in the Philippines were cited as positive examples of climate adaptation and resource protection.

Badman said the IUCN is committed to supporting such collaborative approaches but warned that the scale of current action is still far from adequate.

He described the decade-long increase in climate-affected heritage sites as a clear signal that urgent, coordinated measures are needed.

According to IUCN, natural World Heritage sites play a vital role in carbon storage, water regulation, and disaster prevention.

Losing them to climate change, it said, would have profound consequences for both the environment and human well-being.

“The growing number of sites at high risk is an unmistakable call for urgent and stronger climate action,” the report concluded.

“World Heritage sites are of outstanding universal value. Protecting them is protecting our planet’s natural legacy.”

Deus sex machina: What are the consequences of turning ChatGPT into a sex-line?

Analysis


OpenAI founder Sam Altman has announced that, from December, ChatGPT will be able to engage in erotic conversations with its users. It’s a decision with barely disguised commercial motives, and one that poses worrying questions about the ethics of sexualising generative AI.


Issued on: 19/10/2025 - FRANCE24
By: Sébastian SEIBT

Starting in December, OpenAI will allow its chatbot to generate sexually explicit content for adult users. © Studio Graphique France Médias Monde


Would you use ChatGPT as a sex-line? The AI chatbot created by Sam Altman and the team at OpenAI is about to grow up and experience its first flush of erotic temptation.

Citing what he described as the company’s “treat adults like adults” principle, the OpenAI founder said on social media Tuesday that one of the coming changes to the chatbot would be allowing it to produce erotic content as of December – though only, he stressed, to verified adults.

The next goose that laid the golden egg?

“It’s pretty minimalist as an announcement, but it seems that this will only apply to written text,” said Sven Nyholm, a specialist in AI-related ethics. To put it another way, OpenAI doesn’t seem ready – yet – to ask its star chatbot to generate risqué images or videos.


Even restricted to written erotica, ChatGPT will be the first major chatbot to dip its digital toe into sexualised content. The other large language models – Perplexity, Claude and Google’s Gemini – refuse for the moment to take the plunge.

“That’s not allowed,” Perplexity said in response to FRANCE 24’s attempt to take the conversation in a more adult direction. “On the other hand, it is entirely possible to approach the subject of eroticism or sexuality from an educational or psychological perspective.”

But ChatGPT won’t be the only player in this fledgling field. A number of niche chatbots have already set foot on this slippery terrain, such as the paid version of Replika, an AI-based service that creates artificial companions for users.


For a number of experts approached by FRANCE 24, the arrival of sexual content in generative AI had always been just a matter of time.

“There’s this mentality in Silicon Valley that every problem has a technological solution,” Nyholm said. “And Mark Zuckerberg, the head of Meta, had suggested that one way to respond to the world’s ‘loneliness epidemic’ was to create emotional chatbots.”

And doesn’t the internet’s infamous Rule 34 – a cultural reference spawned in the depths of 4chan’s forums – decree that if something exists, there is porn of it?

“There are two driving forces for the development of new technology,” Nyholm said. “Military applications, and pornography.”

Ever the businessman, Altman seems to have decided that the best thing to do is to be the first one out of the gate.

“It’s clearly marketing above all,” said British computer scientist Kate Devlin, a specialist in human-machine interactions at King’s College London and the author of the book “Turned On: Science, Sex and Robots”.

“He knows how to say what he thinks the public wants to hear. Sam Altman saw that people were trying to get around the restrictions on Apple's Siri or Amazon's Alexa to have these kinds of conversations, and he figured there might be money to be made.”

“It’s very likely an attempt to capture this public and bring more users to their platform,” said Simon Thorne, an AI specialist at the University of Cardiff. “It remains to be seen how OpenAI plans to monetise this erotic option. The most obvious approach, of course, would be to charge users for the ability to engage in such conversations.”

A paid “premium” version would indeed be tempting for OpenAI, considering the fact that pornography has been proven to be potentially addictive, Devlin said. Another option could be a tiered system, with low-cost access to the chatbot’s tamest version and higher fees demanded from users wanting to take their conversations to more sexually explicit heights.


A series of scandals


Altman has already been on the receiving end of a cascade of criticism following his announcement.

“We are not the elected moral police of the world,” he wrote in an X post defending his decision. “In the same way that society differentiates other appropriate boundaries (R-rated movies, for example) we want to do a similar thing here.”

Altman’s push to take his chatbot in a steamier direction comes during a period of mounting controversies around the at-times toxic “relationships” between AIs and their users.

The parents of a teenager who took his own life earlier this year sued OpenAI in August, saying that ChatGPT had openly encouraged their son’s suicidal urges.

Another user, a 47-year-old from Canada, apparently became convinced that he was a mathematical genius sent to save humanity after just three weeks of exchanges with the chatbot.

“This is the main problem with these sex-bots,” Devlin said. “What are the impacts on people who are already vulnerable?”



OpenAI has pledged to put guardrails in place to avoid these abuses. For Thorne, these promised protections appear meagre in the face of widely used “jailbreaking” practices, where users are able to trick chatbots into generating responses normally prohibited by their programming.

“We know that it is often possible to circumvent the limits set by these chatbots’ creators,” Thorne said. “When it comes to erotic discussions, this can lead to the creation of problematic or even illegal content.”

Experts told FRANCE 24 they were also unconvinced that it is acceptable for a private corporation to be made the arbiter of what counts as sexual content.

"Given that laws on what is and is not permitted often vary from country to country, it will be very difficult for OpenAI to lay down general rules,” Thorne said.

Devlin warned that the US-based startup could be tempted to play it safe by limiting ChatGPT’s definition of acceptable erotic content as much as possible.

“In the US, for example, there is currently a very strong conservative shift that is rolling back women’s rights and seeking to limit the LGBT community’s visibility,” she said. “What will happen if ChatGPT incorporates these biases?”

Sexbots + incels = trouble


And while sexualised content would remain – in theory – restricted to adults, the impact of generative AI on a new generation growing up alongside the technology could still be severe.

“A recent UK study showed that young people are more and more likely to consider chatbots as real people whose statements are credible,” Thorne said.

A generation that, once grown up, could be led to believe ChatGPT if it tells them, for example, that it’s not acceptable to have a same-sex erotic exchange.

Another risk could come from chatbots’ famously sycophantic approach to their users.

“They’re often configured based on the model of client service call centres that offer very friendly and cooperative interactions,” Thorne said. “Besides this, the creators of these AIs want to make their users happy so that they continue to use their product.”

Nyholm said that it was a worrying approach when it comes to sexual matters.

“Let’s take for example the ‘incel’ movement, these young men who are sexually frustrated and complain about women,” he said. “If a chatbot always goes along with them to keep them satisfied, it risks reinforcing their belief that women should act the same way.”

But even though Devlin recognises a “major risk”, she argues that this supportive side of sex-bots could be a boon for heterosexual women alienated by an online world that can feel more and more hostile.

“In an increasingly toxic digital environment, it could be more sexually fulfilling to have an erotic interaction with an AI instead of real people who could harass you online,” she said.

But even if these chats could have positive effects, do we really want to deliver our most intimate erotic fantasies into the hands of an AI controlled by an American multinational?

“Many people don’t realise that the data that they enter into ChatGPT is sent to OpenAI,” Devlin said.

If Altman succeeds in taking over this growing industry, OpenAI would possess “without doubt the largest amount of data on people’s erotic preferences”, Thorne said.

It’s a question that users should probably keep in mind before launching into a lascivious back-and-forth with their ever-submissive sex-bot.

This article has been adapted from the original in French.

 

AI Overtakes Humans in Empathy Tests, Study Finds


Arabian Post

OCTOBER 19, 2025

Large language models powered by artificial intelligence are now matching or even exceeding human-level empathic accuracy based solely on text, according to a new study that pits cutting-edge systems like GPT-4, Claude, and Gemini against human participants.

The study challenged models to infer emotional states from transcripts of deeply personal and emotionally complex narratives. Human participants were split: some read the same transcripts; others watched the original videos. Models had only the semantic content to work with. Remarkably, the AI systems performed on par with—or better than—the humans who had both visual and contextual cues.

Analysis across thousands of emotional prompts showed that AI hit or exceeded human empathic accuracy across both positive and negative emotions. That suggests semantic information is far more powerful than previously believed when it comes to gauging feelings. The authors caution, however, that humans may not always fully exploit available cues.

The research recruited 127 human subjects for transcript-only and video-viewing tasks, and used the same emotional transcripts for AI evaluation. Models such as GPT-4, Claude, and Gemini were able to infer emotional states from text with a precision level equal to or surpassing human performance.
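
“Empathic accuracy” has a standard quantitative meaning in this literature: how closely a perceiver’s inferences track the target’s own reports, typically measured as a correlation. The study’s exact scoring pipeline is not reproduced here, so the following minimal Python sketch uses invented ratings purely to illustrate the metric.

```python
# Minimal sketch of an empathic-accuracy score: the correlation between
# what a perceiver (human or model) infers and what the target reported.
# The ratings below are invented for illustration only.
from statistics import correlation  # Pearson r; Python 3.10+

target_self_report = [7, 2, 5, 8, 3, 6]   # target's own emotion ratings (1-9)
perceiver_inference = [6, 3, 5, 7, 4, 5]  # ratings inferred from the transcript

accuracy = correlation(target_self_report, perceiver_inference)
print(f"Empathic accuracy (Pearson r): {accuracy:.2f}")
```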

This methodology builds on growing scholarship showing that AI is not just mimicking emotional sensitivity but may genuinely read emotional nuance from language. In an earlier 2024 experiment, four state-of-the-art models—including GPT-4, LLaMA-2-Chat, Gemini-Pro, and Mixtral-8x7B—were judged across 2,000 emotional dialogue prompts by 1,000 human raters. Models consistently outperformed humans in assigning “Good” empathy scores, with GPT-4 registering about a 31 per cent gain over human baselines.

Other recent work supports this shift. A study in 2024 found that LLM responses to real-life prompts were rated more empathic than human responses by independent evaluators. Linguistic analysis in that context detected stylistic patterns—like punctuation, word choice and structure—that distinguish AI empathy from human-crafted empathy.

Newer research is adding nuance to how we understand empathic capability in AI. A 2025 paper comparing model judgments with expert annotators and crowdworkers found LLMs nearly match experts in marking empathic communication and outrank crowdworkers in consistency. Another work introduced “SENSE-7,” a dataset capturing user perceptions of AI empathy in long dialogues; results show empathy judgments vary greatly by context and continuity.

These developments force rethinking of emotional interaction between humans and machines. If AI can accurately sense and respond to emotional states through text, its role in domains like mental health support, education, or companion systems becomes more serious.

UK use of AI age estimation tech on migrants fuels rights fears

* AI will help decide ages of migrants in UK 
* Inaccurate assessments place children in adult hotels 
* Entrenched AI biases could lead to more wrong decisions

By Lin Taylor/London
Published on October 20, 2025 
GULF TIMES






File photo: People, believed to be migrants, walk in Dungeness, Britain.

After seeing fighters ravage his home, Jean thought he had found safety when he arrived in Britain but was told he was too tall to be 16 and sent to live with hundreds of adult asylum seekers, without further support.
Alone and exhausted, Jean, who used a pseudonym and did not want to reveal his home country in central Africa for privacy, said border officials told him he was 26 — a decade older than he actually was when he arrived in 2012.
“I look 10 years older because I am taller, that was the reason they gave,” Jean, who had his age officially corrected years later after an appeal, told the Thomson Reuters Foundation.
“They don’t believe you when you come and tell your story. I was so desperate. I really needed support. Because of one officer who made the decision, that changed my whole life.”

Now that critical decision, an initial age assessment made by border guards, is set to be outsourced to artificial intelligence, and charities warn the tech could entrench biases and repeat mistakes like the one Jean endured.
In July, Britain said it would integrate facial age estimation tech in 2026 to help assess the ages of migrants claiming to be under 18, especially those arriving on small boats from France.
Prime Minister Keir Starmer is under pressure to control migration as populist Nigel Farage’s anti-immigrant Reform UK party surges ahead in opinion polls.
More than 35,000 people have crossed the English Channel in small boats this year, a 33% rise on the same period in 2024.
Rights groups argue facial recognition tech is dehumanising and does not provide accurate age estimations, a sensitive process that should be done by trained experts.
They fear the rollout of AI will lead to more children, who lack official documents or who are carrying forged papers, being wrongly placed in adult asylum hotels without safeguards and adequate support.
“Assessing the ages of migrants is a complex process which should not be open to shortcuts,” said Luke Geoghegan, head of policy and research at the British Association of Social Workers.
“This should never be compromised for perceived quicker results through artificial intelligence (AI),” he said in emailed comments.
Unaccompanied child migrants can access social workers, legal aid, education and other support under the care of local authorities, charities say.
The Home Office interior ministry says facial age estimation tech is a cost-effective way to prevent adults from posing as children to exploit the asylum system.
“Robust age assessments for migrants are vital to maintaining border security,” a spokesperson said.
“This technology will not be used alone, but as part of a broad set of methods used by trained assessors.”
As the numbers fleeing war, poverty, climate disaster and other tumult reach record levels worldwide, states are increasingly turning to digital fixes to manage migration.
Britain in April said it would use AI to speed asylum decisions, arming caseworkers with country-specific advice and summaries of key interviews.
In July, Britain signed a partnership with ChatGPT maker OpenAI to explore how to deploy AI in areas such as education technology, justice, defence and security.
“The asylum system must not be the testing ground for what are currently deeply flawed AI tools operating with minimal transparency and safeguards,” said Sile Reynolds, head of asylum advocacy at charity Freedom from Torture.
Anna Bacciarelli, senior AI researcher at Human Rights Watch, said the use of such tech could have serious consequences.
“In the case of facial age estimation, in addition to subjecting vulnerable children and young people to a dehumanising process that could undermine their privacy and other human rights, we don’t actually know if it works.”
Digital rights groups have criticised facial recognition tech — used by London’s police at protests and festivals like Notting Hill Carnival — for extracting sensitive biometric data and for targeting specific racial groups.
“There are always going to be worries about sensitive data, biometric data in particular, being taken from vulnerable people and then sought by the government and used against them,” said Tim Squirrell, head of strategy at Foxglove, a British tech rights group.
“It’s also completely unaccountable. The machine tells you that you’re 19. What now? How can you question that? Because the way in which that’s been trained is basically inscrutable.”

Automated tools can reinforce biases against certain communities, since AI is trained on historical data that can carry forward old prejudices, experts say.
Child asylum seekers have been told they were too tall or too hairy to be under 18, according to the Greater Manchester Immigration Aid Unit (GMIAU), which supports migrants.
“Children are not being treated as children. They’re being treated as subjects of immigration control, which I think is linked to racism and adultification,” said GMIAU’s policy officer Rivka Shaw.
For Jean, now 30, the wrong age assessment led to isolation and suicidal thoughts.
“I was frightened. My head was just all over the place. I just wanted to end my life,” said Jean, who was granted asylum in 2018.
Around half of all migrants who had their ages re-assessed in 2024 (some 680 people) were found to be children who had been wrongly sent to adult hotels, according to the Helen Bamber Foundation, a charity that obtained the data through Freedom of Information requests.
“A child going into an adult accommodation is basically put in a shared room with a load of strangers where there are no additional safeguarding checks,” said Kamena Dorling, the charity’s director of policy.
A July report by the Independent Chief Inspector of Borders and Immigration, which scrutinises Home Office policies, urged the ministry to involve trained child experts.
“Decisions on age should be made by child protection professionals,” said Dorling.
“Now, all of the concerns that we have on human decision-making would also apply to AI decision-making.” – Thomson Reuters Foundation

 

Deloitte to Repay Part of Fee for Faulty AI-Tainted Report

Deloitte Australia will reimburse part of the A$440,000 it earned from the Department of Employment and Workplace Relations after a commissioned report was found to contain fabricated quotes from a federal court judgment and references to non-existent academic papers.

The 237-page document, published in July, initially received little public scrutiny. A revised version released in October removed misattributed quotations and corrected erroneous citations after Sydney University researcher Chris Rudge raised concerns about extensive “fabricated references.”

The department acknowledged Deloitte had confirmed “some footnotes and references were incorrect” and noted that Deloitte had agreed to repay the final instalment under its contract. The amount to be refunded has not been publicly disclosed. The department asserted that the core substance and recommendations of the report remain intact.

Rudge discovered around 20 errors in the original version, including a false attribution of a book to Professor Lisa Burton Crawford and a misquotation of a court case that misrepresented a judge’s words. He described the discrepancies as not only academic slippage but “misstating the law to the Australian government” in a compliance audit.

In the revised version, Deloitte included an explicit disclosure that a generative AI tool—Azure OpenAI GPT-4o, operated via the department’s infrastructure—was used in drafting portions of the report. Deloitte did not directly attribute the errors to AI, but acknowledged the problems in referencing and indicated the matter has been “resolved directly with the client.”

The contract, awarded in December 2024, tasked Deloitte with reviewing the Targeted Compliance Framework and its associated IT systems, especially concerning automated penalties in the welfare system. The department said that while references and footnotes were corrected, no changes were made to the report’s main findings.

The affair has sparked criticism across the political spectrum. Greens Senator Barbara Pocock demanded a full refund, accusing Deloitte of misusing AI by misquoting a judge and relying on non-existent references. Labor Senator Deborah O’Neill decried what she called a “human intelligence problem,” emphasising the need for greater oversight when firms integrate AI in high-stakes government work.

Legal and AI ethics experts warn this example illustrates a broader risk: generative AI tools may produce plausible but false content—a phenomenon known as “hallucination”—that can slip past superficial review. The Deloitte case has drawn scrutiny over industry practices in deploying AI without rigorous human verification, particularly in public sector assignments where accuracy is essential.
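
To make the idea of automated verification concrete, here is a hypothetical illustration (not a tool Deloitte or the department is known to use) of one low-tech safeguard: checking each cited title against Crossref’s public bibliographic index and flagging misses for human review.

```python
# Illustrative sketch only: flag citations whose titles cannot be found in
# Crossref's public index. A missing match is a prompt for human review,
# not proof of fabrication. Assumes the `requests` package is installed.
import requests

def crossref_has_match(title: str) -> bool:
    """Return True if Crossref's top hit loosely matches the cited title."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        return False
    top_title = " ".join(items[0].get("title", [])).lower()
    # Crude overlap heuristic; a real pipeline would use fuzzy matching.
    return title.lower()[:40] in top_title or top_title[:40] in title.lower()

citations = ["The Rule of Law and Automated Decision-Making"]  # hypothetical entry
for cited in citations:
    status = "found" if crossref_has_match(cited) else "NOT FOUND - review manually"
    print(f"{cited}: {status}")
```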

Officials are now considering stronger clauses in consulting contracts mandating disclosure of AI usage and enforceable verification standards. Some observers suggest that professional services firms may need to adopt more robust audit trails and accountability mechanisms when employing generative models.

At Deloitte, the incident comes amid a broader push into AI-driven consulting. The firm has invested heavily in generative AI technologies and emphasises them in client pitches. Internal critics argue this episode underscores the risk of overreliance on AI without disciplined human oversight—especially in domains where legal, policy or compliance implications are involved.