Monday, January 05, 2026

EU Commission examining concerns over childlike sexual images generated by Elon Musk’s Grok

Elon Musk attends the Choose France summit at Versailles, May 15, 2023.
Copyright Ludovic Marin/AP
By Romane Armangau

A spokesperson for the European Commission said it was “very seriously looking into” the creation of sexually explicit images of girls – including minors – by Grok, the AI model integrated into X.

The European Commission has announced it is looking into cases of sexually suggestive and explicit images of young girls generated by Grok, the AI chatbot integrated into social media platform X, following the introduction of a paid feature known as “Spicy Mode” last summer.

“I can confirm from this podium that the Commission is also very seriously looking into this matter,” a Commission spokesperson told journalists in Brussels on Monday.

“This is not 'spicy'. This is illegal. This is appalling. This is disgusting. This has no place in Europe.”

On Sunday, in response to growing anger and alarm at the images, the social media platform said the images had been removed from the platform and that the users involved had been banned.

“We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” the X Safety account posted.

Similar investigations have been opened in France, Malaysia and India.

The European Commission also referenced an episode last November in which Grok generated Holocaust denial content. The Commission said it had sent a request for information under the EU's Digital Services Act (DSA), and that it is now analysing the response.

In December, X was fined €120 million under the DSA over its handling of account verification check marks and its advertising policy.

“I think X is very well aware that we are very serious about DSA enforcement. They will remember the fine that they have received from us,” said the EU Commission spokesperson.


EU says ‘seriously looking’ into Musk’s Grok AI over sexual deepfakes of minors

By AFP
January 5, 2026


Under fire: Elon Musk's xAI runs his AI tool Grok - Copyright AFP/File Lionel BONAVENTURE

The European Commission said Monday it is “very seriously looking” into complaints that Elon Musk’s AI tool Grok is being used to generate and disseminate sexually explicit childlike images.

“Grok is now offering a ‘spicy mode’ showing explicit sexual content with some output generated with childlike images. This is not spicy. This is illegal. This is appalling,” EU digital affairs spokesman Thomas Regnier told reporters.

“This has no place in Europe.”

Complaints of abuse began hitting Musk’s X social media platform, where Grok is available, after an “edit image” button for the generative artificial intelligence tool was rolled out in late December.

But Grok maker xAI, run by Musk, said earlier this month it was scrambling to fix flaws in its AI tool.

The public prosecutor’s office in Paris has also expanded an investigation into X to include new accusations that Grok was being used for generating and disseminating child pornography.

X has already been in the EU’s crosshairs.

Brussels in December slapped the platform with a 120-million-euro ($140-million) fine for violating the EU’s digital content rules on transparency in advertising and for its methods for ensuring users were verified and actual people.

X remains under investigation under the EU’s Digital Services Act in a probe that began in December 2023.

The commission, which acts as the EU’s digital watchdog, has also demanded information from X about comments made around the Holocaust.

Regnier said X had responded to the commission’s request for information.

“I think X is very well aware that we’re very serious about DSA enforcement, they will remember the fine that they have received from us back in December. So we encourage all companies to be compliant because the commission is serious about enforcement,” he added.
