Thursday, January 15, 2026

 

X bans sexually explicit Grok deepfakes – but is its clash with the EU over?

Image: The X landing page. Credit: Rick Rycroft/AP, 2023

By Romane Armangau

While Elon Musk’s company has said it is taking steps to prevent its AI chatbot from creating nude images of real people, the European Commission has yet to be reassured.

Amid mounting pressure in Europe and abroad, Elon Musk’s social media platform has announced that it is implementing "technological measures to prevent its AI tool, Grok, from allowing the editing of images of real people in revealing clothing such as bikinis", a restriction that will apply to all users, including paid subscribers.

Grok's image editing function had been used by some users to virtually undress pictures of real women and underage girls. The situation, described as "appalling" and "disgusting" by the European Commission, prompted the EU executive to launch a request for information and a document retention order addressed to X.

Speaking through one of its spokespersons, the European Commission said it had taken note of the changes to Grok’s functionality, but warned that it would remain vigilant.

"We will carefully assess these changes to make sure they effectively protect citizens in the EU," the spokesperson said, adding that "should these changes not be effective, the Commission will not hesitate to use the full enforcement toolbox of the Digital Services Act."

If X is found to have breached EU online platform rules under the Digital Services Act, the Commission could fine the company as much as 6% of its global annual turnover.

Last month, the European Commission fined Elon Musk’s social network €120 million over its account verification tick marks and advertising practices.

Investigations into the platform’s chatbot are ongoing in France, the United Kingdom and Germany, as well as in Australia. Grok has been banned altogether in Indonesia and Malaysia.



World-first tool reduces harmful engagement with AI-generated explicit images





University College Cork

Image: UCC School of Applied Psychology researchers, pictured left to right: Dr Conor Linehan; John Twomey, lead researcher of Deepfakes/Real Harms; and Dr Gillian Murphy. Credit: University College Cork






  • World’s first research-backed intervention reduces harmful engagement with AI-generated explicit imagery.
  • As the Grok AI-undressing controversy grows, researchers say user education must complement regulation and legislation.
  • Study links belief in deepfake pornography myths to higher risk of engagement with non-consensual AI imagery.

 

Friday, 16 January 2026: A new evidence-based online educational tool aims to curb the watching, sharing, and creation of AI-generated explicit imagery.

Developed by researchers at University College Cork (UCC), the free 10-minute intervention Deepfakes/Real Harms is designed to reduce users’ willingness to engage with harmful uses of deepfake technology, including non-consensual explicit content.

In the wake of the ongoing Grok AI-undressing controversy, pressure is mounting on platforms, regulators, and lawmakers to confront the rapid spread of these tools. UCC researchers say educating internet users to discourage engagement with AI-generated sexual exploitation must also be a central part of the response.

False myths drive participation in non-consensual AI imagery

UCC researchers found that people’s engagement with non-consensual synthetic intimate imagery, often and mistakenly referred to as “deepfake pornography”, is associated with belief in six myths about deepfakes. These include the beliefs that the images are harmful only if viewers think they are real, and that public figures are legitimate targets for this kind of abuse.

The researchers found that completing the free, online 10-minute intervention, which encourages reflection and empathy with victims of AI imagery abuse, significantly reduced belief in common deepfake myths and, crucially, lowered users’ intentions to engage with harmful uses of deepfake technology.

Using empathy to combat AI imagery abuse at its source

The intervention has been tested with more than two thousand international participants of varied ages, genders, and levels of digital literacy, with effects evident both immediately and at a follow-up weeks later.

The tool is now freely available at https://www.ucc.ie/en/deepfake-real-harms/.

Lead researcher John Twomey, UCC School of Applied Psychology, said: “There is a tendency to anthropomorphise AI technology – blaming Grok for creating explicit images and even running headlines claiming Grok ‘apologised’ afterwards. But human users are the ones deciding to harass and defame people in this manner. Our findings suggest that educating individuals about the harms of AI identity manipulation can help to stop this problem at source.”

Dr Gillian Murphy, UCC School of Applied Psychology and research project Principal Investigator, said: “Referring to this material as ‘deepfake pornography’ is misleading. The word ‘pornography’ generally refers to an industry where participation is consensual. In these cases, there is no consent at all. What we are seeing is the creation and circulation of non-consensual synthetic intimate imagery, and that distinction matters because it captures the real and lasting harm experienced by victims of all ages around the world.”

“This toolkit does not relieve platforms and regulators of their responsibilities in tackling this appalling abuse, but we believe it can be part of a multi-pronged approach. All of us – internet users, parents, teachers, friends and bystanders – can benefit from a more empathetic understanding of non-consensual synthetic imagery,” Dr Murphy said.

Dr Conor Linehan, UCC School of Applied Psychology, said: “With this project, we are building on our previous work in the area of responsible software innovation. We propose a model of responsibility that empowers all stakeholders, from platforms to regulators to end users, to recognise their power and take all available action to minimize harms caused by emerging technologies.”

Reducing intentions to engage in harmful deepfake behaviours

Feedback from those who have completed the intervention includes:

“I think it was very useful to show that deepfakes can be damaging even if people know they aren't real. Too much of the deepfake discourse focuses on people being unable to tell them apart from reality when that's only part of the issue.”

“What stood out as good about this is that it didn’t come across as judgmental or preachy—it was more like a pause button. It gave space to think about the human side of the issue without making anyone feel attacked. … Instead of just pointing fingers, it gave you a chance to reflect and maybe even empathize a little, which can make the message stick longer than just being told, ‘Don’t do this’.”

Deepfakes/Real Harms is launched as part of UCC Futures - Artificial Intelligence and Data Analytics.

Professor Barry O’Sullivan, Director of UCC Futures - Artificial Intelligence and Data Analytics and member of the Irish Government’s Artificial Intelligence Advisory Council, said: “As we work towards a future of living responsibly with artificial intelligence, there is an urgent need to improve AI literacy across society. As my colleagues at UCC have demonstrated with this project, this approach can reduce abuse perpetration and combat the stigma faced by victims.”

This project is funded by Lero, the Research Ireland Centre for Software.

ENDS
