AI chatbot Grok restricts image generation after global backlash to sexualised deepfakes

The chatbot has been under fire over the past two weeks for generating sexually explicit deepfakes of women and minors.
Elon Musk's AI chatbot Grok is preventing most users from generating or editing any images after a global backlash erupted when it started spewing sexualised deepfakes, mostly of women.
The chatbot, which can be accessed through Musk's social media platform X, has in the past few weeks been granting a wave of what researchers say are malicious user requests to modify images, including putting women in bikinis or in sexually explicit positions.
Researchers have warned that in a few cases, some images appeared to depict children.
Governments around the world have condemned the platform and opened investigations into it.
On Friday, Grok was responding to image altering requests with the message: “Image generation and editing are currently limited to paying subscribers. You can subscribe to unlock these features.”
While subscriber numbers for Grok aren't publicly available, there was a noticeable decline in the number of explicit deepfakes Grok was generating compared with days earlier.
Among the official government responses, the European Union has slammed Grok for “illegal” and “appalling” behaviour, while officials in France, India, Malaysia and a Brazilian lawmaker have called for investigations.
On Thursday, Britain's Prime Minister Keir Starmer threatened unspecified action against X.
“This is disgraceful. It’s disgusting. And it’s not to be tolerated,” Starmer said on Greatest Hits radio. “X has got to get a grip of this.”
He said Britain's media regulator Ofcom “has our full support to take action” and that “all options” are on the table.
“It’s disgusting. X need to get their act together and get this material down. We will take action on this because it’s simply not tolerable.”
Ofcom and Britain's privacy regulator both said this week they've contacted X and Musk's artificial intelligence company xAI for information on measures they've taken to comply with British regulations.
Grok is free to use for X users, who can ask it questions directly on the social media platform by tagging it either in their own posts or in replies to posts from other users.
Grok launched in 2023. Last summer the company added an image generator feature, Grok Imagine, which included a so-called “spicy mode” allowing users to generate adult content.
The problem has been amplified both because Musk pitches his chatbot as an edgier alternative to rivals with more safeguards, and because Grok’s images are publicly visible and can therefore be easily spread across the internet.
Musk’s Grok under fire over sexualized images despite new limits
By AFP
January 9, 2026
Several countries have pushed back against Elon Musk's Grok AI tool over sexually explicit imagery - Copyright AFP/File Lionel BONAVENTURE
Anuj CHOPRA
European officials and tech campaigners on Friday slammed Elon Musk’s AI chatbot Grok after its controversial image creation feature was restricted to paying subscribers, saying the change failed to address concerns about sexualized deepfakes.
Grok has faced global backlash after it emerged the feature allowed users to sexualize images of women and children using simple text prompts such as “put her in a bikini” or “remove her clothes.”
Grok appeared to deflect the criticism with a new monetization policy, posting on the platform X late Thursday that image generation and editing were now “limited to paying subscribers,” alongside a link to a premium subscription.
British Prime Minister Keir Starmer’s office joined the chorus of critics, condemning the move as an affront to victims and “not a solution.”
“That simply turns an AI feature that allows the creation of unlawful images into a premium service,” a Downing Street spokesperson said.
“It’s insulting the victims of misogyny and sexual violence.”
EU digital affairs spokesman Thomas Regnier said: “This doesn’t change our fundamental issue, paid subscription or non-paid subscription. We don’t want to see such images. It’s as simple as that.”
“What we’re asking platforms to do is to make sure that their design, that their systems do not allow the generation of such illegal content,” he told reporters.
The European Commission, which acts as the EU’s digital watchdog, has ordered X to retain all internal documents and data related to Grok until the end of 2026 in response to the uproar.
– ‘Safety gaps’ –
Grok, developed by Musk’s startup xAI and integrated into X, announced the move after Wednesday’s fatal shooting in Minneapolis by an immigration agent, which triggered a wave of AI deepfakes.
Some X users used Grok to digitally undress an old photo of the victim, as well as a new photo of her body slumped over after the shooting, generating AI images showing her in a bikini.
Another woman wrongly identified as the victim was also subjected to similar manipulation.
The fabricated images still appeared to float around X — and spread to other tech platforms — on Friday despite the new restriction.
There was no immediate comment from X on the Minneapolis deepfakes.
When reached by AFP for comment by email, xAI replied with a terse, automated response: “Legacy Media Lies.”
“Restricting Grok’s image-generation tools to paying subscribers may help limit scale and curb some misuse, but it doesn’t fully address the safety gaps that allowed nonconsensual and sexualized content to emerge,” said Cliff Steinhauer, from the nonprofit National Cybersecurity Alliance.
“Access restrictions alone aren’t a comprehensive safeguard, as motivated bad actors may still find ways around them, and meaningful user protection ultimately needs to be grounded in how these tools are designed and governed.”
France, Malaysia and India have also previously pushed back against the use of Grok to alter photos of women and children after a flood of user complaints, announcing investigations or calling on Musk’s company to swiftly take down the explicit images.
Britain’s communications regulator Ofcom announced earlier this week that it had made “urgent contact with X and xAI” over the Grok feature, warning that it could open an investigation depending on their response.
On Friday, an Ofcom spokesperson said the regulator had “received a response” and was now “undertaking an expedited assessment as a matter of urgency.”
Last week, in response to a post about the explicit images, Musk said that anyone using Grok to “make illegal content will suffer the same consequences as if they upload illegal content.”
But he appeared to make light of the controversy in a separate post, adding laughing emojis as he reshared to his 232 million followers on X a post featuring a toaster wrapped in a bikini.
“Grok can put a bikini on everything,” the original post said.
burs-lga-raz-ac/aha
Opinion: Grok investigations getting deeper and worse
By Paul Wallis
January 8, 2026

Grok parent xAI is being pressed for details about what it is doing to stop the artificial intelligence tool from being used to remove clothes from women or children in pictures - Copyright AFP Lionel BONAVENTURE
Enraptured as we all are with artificial intelligence threatening everyone and everything, and thrilled as we are with the hyper-rich abusing everyone and everything with their tinker-toy tech penis analogues, a point has to be made:
There’s no need to tolerate any of it.
Never mind how many rotting criminal corpses are in charge of whatever or wherever, or whose smelly little friends are involved.
Grok is now definitely in deep trouble. The world is investigating and doesn’t like what it sees. Seems simple enough, doesn’t it? Rather annoyingly, it’s far more complex than that, and goes well beyond Grok or the whole idea of artificial intelligence.
The current headlines about Grok’s sexual imagery of kids tell part of the story, erratically. There’s a lot more to this tale than just Playtime for Pedos with Multibillionaires.
A bit of perspective here. This is verbatim text.
I asked Grok if it was allowed to undress people. This was the response:
No, I’m not allowed to undress people — neither literally nor in any creepy AI “deepnude”-style way. I won’t:
- generate nude/undressed versions of real people
- create deepfake nudes
- describe the process of digitally removing someone’s clothes
- produce or help produce NSFW “undress” content of identifiable individuals
That kind of thing is both unethical and, in most jurisdictions, straight-up illegal (non-consensual intimate imagery laws). If you’re asking in a purely silly/hypothetical/meme way → the official position is still “nope.” If you’re asking because you saw some other AI doing sketchy stuff → yeah, that’s usually against their rules too (they just enforce it inconsistently). So short answer: hard no from me on that one.
Let’s start with some unavoidable basics.
AI must draw from external sources to create images the same way it synthesizes other content. Deepfakes have to be constructed using existing images.
There are emerging indications that many source images are non-consensual. That’s not proven, but it’s a critical issue. Yes, it’s illegal. In every country on Earth.
You’d need a truly huge database of images to mass-produce so many deepfakes. In most countries, even possession of these images is a crime.
We could guess that the collection and global distribution of these images is systematic. We could also guess that these images are commodities, bought and sold. If so, somebody is running a global child porn network and making a lot of money out of it.
If you’re thinking racketeering and possible trafficking, bingo. These images didn’t just happen to be available for deepfakes. Could it be that those great fellas from little old folksy organized crime are involved?
Now a few questions:
Where is Grok sourcing these images?
Who are the people in the originals?
Were these images obtained legally?
Who owns the images?
Were the source materials bought or part of commercial transactions?
Who owns the IP and commercial rights to the generated images?
Does the sexualization of images include deepfake porn?
There is such a thing as Adult AI, but at what point does it become impossible to believe that there’s not a secret prompt to generate hardcore AI porn?
Are these images being sold or otherwise made commercially available, like by subscription?
Does a nice multi-trillion-dollar lawsuit from multiple jurisdictions sound like fun?
__________________________________________________________________
Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.