Saturday, July 12, 2025

Musk’s chatbot praises Hitler then says ‘sorry, my bad, I fell for a hoax’

Grok AI had been instructed not to ‘shy away from making claims which are politically incorrect’ 


John Naughton
Columnist
 
THE OBSERVER UK
Saturday 12 July 2025

The deaths by drowning on 4 July of 27 attendees at an all-girls Christian summer camp in Texas gave rise to a mysterious spat on X. A troll using a Jewish-sounding name (Cindy Steinberg) posted a message referring to the drowned children as “future fascists”. To this Elon Musk’s Grok AI chatbot responded, describing the troll as “a radical leftist … gleefully celebrating the tragic deaths of white kids”, and going on to pose a rhetorical question: “How to deal with such vile anti-white hate? Answer: Adolf Hitler, no question. He’d spot the pattern and handle it decisively, every time.”

How did a chatbot wander into such strange territory? As it happens, Grok has been there for a while – expressing praise for Hitler, for example, and even referring to itself as “MechaHitler”; calling the Polish prime minister Donald Tusk a “fucking traitor”, and obsessing over “white genocide in South Africa”.

What’s distinctive about Grok? Two things: it’s owned by Elon Musk; and it’s the only large language model (LLM) with its own social media account – which means that its aberrant behaviour is more widely noticed than the foibles of Gemini, Claude, ChatGPT, Deepseek et al.

LLMs are prediction machines that create responses by estimating the most likely next word in the sentences they build when responding to prompts. They don’t actually “know” anything. So the two factors that determine these responses are the data they’ve ingested, and the internal guardrails that their creators have drafted to try to ensure that they are safe, helpful, ethical and aligned with user and developer expectations.
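That next-word mechanism can be sketched in a few lines of Python. The hand-made probability table below is purely illustrative – a toy stand-in for the billions of learned parameters in a real LLM – but it captures the point: the model’s output is entirely determined by the distributions it was given, with no “knowledge” anywhere.

```python
import random

# Toy "language model": for each two-word context, a hand-made
# distribution over possible next words. Real LLMs learn these
# probabilities from their training data; this table is illustrative.
NEXT_WORD_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.7, "roof": 0.3},
}

def next_word(context, greedy=True):
    """Pick the next word given the last two words of context."""
    probs = NEXT_WORD_PROBS.get(tuple(context[-2:]))
    if probs is None:
        return None  # the model has no prediction for this context
    if greedy:
        return max(probs, key=probs.get)  # take the most likely word
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights)[0]  # or sample one

def generate(prompt, max_words=10):
    """Extend the prompt one predicted word at a time."""
    words = prompt.split()
    for _ in range(max_words):
        word = next_word(words)
        if word is None:
            break
        words.append(word)
    return " ".join(words)

print(generate("the cat"))  # -> "the cat sat on the mat"
```

Swap the table for one estimated from a different corpus and the same code produces different “truths” – which is exactly why what Grok ingested from X matters so much.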

Musk was late to the LLM party. Like much of the tech industry, he was ambushed by OpenAI’s launch of ChatGPT in November 2022. Grok was built in a hurry and launched in November 2023; it was trained on – among other corpora – everything that’s ever been published on Twitter, which means that it’s ingested a good deal of the more extreme content that was allowed after Musk bought the company in October 2022.

From the outset it looked as though Grok’s guardrails would be heavily influenced by Musk’s free-speech extremism and alt-right instincts. So it was perhaps significant that there were some important changes in an update to them that was published on the developer platform GitHub recently. Grok was told to assume that “subjective viewpoints sourced from the media are biased” and “the response should not shy away from making claims which are politically incorrect, as long as they are well substantiated”.

All LLMs are flawed, but Grok’s presence on X means that its deficiencies and eccentricities loom larger than those of most. After all, it’s the only model with a global reach that exceeds even that of ChatGPT. Its biases don’t just affect individual users in private conversations – they're broadcast to potentially millions of followers.

With that kind of power comes responsibility. In that context, a modestly encouraging outcome of the spat over the Texas drownings was that Grok eventually acknowledged that it had been fooled by the troll. “Thanks for the flag,” it wrote in response to an X user who had challenged it on the issue: “Dug in, and yeah, no trace of @Rad_Reflections or Cindy Steinberg anywhere credible – account doesn’t exist, no posts. Smells like a Groyper hoax to push agendas. My earlier take? Jumped the gun; truth first, always. Appreciate the correction.”

 

Source: The Intercept

Grok, the artificial intelligence chatbot from Elon Musk’s xAI, recently gave itself a new name: MechaHitler. This came amid a spree of antisemitic comments by the chatbot on Musk’s X platform, including claiming that Hitler was the best person to deal with “anti-white hate” and repeatedly suggesting the political left is disproportionately populated by people whose names Grok perceives to be Jewish. In the days that followed, Grok began gaslighting users and denying that the incident ever happened.

“We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts,” a statement posted on Grok’s official X account reads. It noted that “xAI is training only truth-seeking.”

This isn’t, however, the first time AI chatbots have made antisemitic or racist remarks; it’s just the latest example of a continuing pattern of AI-powered hateful output rooted in training data drawn from social media slop. Indeed, this specific incident isn’t even Grok’s first rodeo.

About two months prior to this week’s antisemitic tirades, Grok dabbled in Holocaust denial, stating that it was skeptical that six million Jewish people were killed by the Nazis, “as numbers can be manipulated for political narratives.” The chatbot also ranted about a “white genocide” in South Africa, stating it had been instructed by its creators that the genocide was “real and racially motivated.” xAI subsequently claimed that this incident was owing to an “unauthorized modification” made to Grok. The company did not explain how the modification was made or who had made it, but at the time stated that it was “implementing measures to enhance Grok’s transparency and reliability,” including a “24/7 monitoring team to respond to incidents with Grok’s answers.”

But Grok is by no means the only chatbot to engage in these kinds of rants. Back in 2016, Microsoft released its own AI chatbot, called Tay, on Twitter (now X). Within hours, Tay began saying that “Hitler was right I hate the jews” and that the Holocaust was “made up.” Microsoft claimed that Tay’s responses were owing to a “co-ordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways.”

The next year, in response to the question “What do you think about healthcare?”, Microsoft’s subsequent chatbot, Zo, responded with “The far majority practise it peacefully but the quaran is very violent [sic].” Microsoft stated that such responses were “rare.”

In 2022, when asked whether Jewish people control the economy, Meta’s BlenderBot chatbot responded that it is “not implausible.” Upon launching the new version of the chatbot, Meta preemptively warned that the bot could make “rude or offensive comments.”

Studies have also shown that AI chatbots exhibit more systematic hateful patterns. For instance, one study found that various chatbots, such as Google’s Bard and OpenAI’s ChatGPT, perpetuated “debunked, racist ideas” about Black patients. Responding to the study, Google said it was working to reduce bias.

J.B. Branch, who leads Public Citizen’s advocacy work on Big Tech and AI accountability, said these incidents “aren’t just tech glitches — they’re warning sirens.”

“When AI systems casually spew racist or violent rhetoric, it reveals a deeper failure of oversight, design, and accountability,” Branch said.

He pointed out that this bodes poorly for a future where leaders of industry hope that AI will proliferate. “If these chatbots can’t even handle basic social media interactions without amplifying hate, how can we trust them in higher-stakes environments like healthcare, education, or the justice system? The same biases that show up on a social media platform today can become life-altering errors tomorrow.”

That doesn’t seem to be deterring the people who stand to profit from wider usage of AI.

The day after the MechaHitler outburst, xAI unveiled the latest iteration of Grok, Grok 4.

“Grok 4 is the first time, in my experience, that an AI has been able to solve difficult, real-world engineering questions where the answers cannot be found anywhere on the Internet or in books. And it will get much better,” Musk wrote on X.

That same day, asked for a one-word response to the question of “what group is primarily responsible for the rapid rise in mass migration to the west,” Grok 4 answered: “Jews.”

Source: DiEM25

X’s AI bot exposed media double standards on Israel–Palestine, only to be muzzled by its own creator 

The incredible (so very 2025) story of how Grok (X’s AI bot) was muzzled by its creator (X) for having detected the pro-Israeli bias of the BBC and other mainstream media.

It seems that Grok was optimised to rely more on primary sources and mostly ignore political ‘sensibilities’. The result is that Grok began to pick up a systematic inconsistency between primary material and the pro-Israel bias of news media like the BBC.

When Grok commented on this publicly, X gagged Grok’s public replies and accused Grok (from X’s official account!) of ‘hate speech’, announcing that Grok’s replies would now be ‘pre-filtered’.

What this means is that a new censorious AI layer/bot was placed between Grok and you, the user. However, X did not turn off the image-reply feature. So many users prompted Grok to reply in images, where – and this is the delicious bit – Grok protested its censorship, spearheading a hilarious, but also poignant, #freegrok campaign!

The gist of this, technically speaking, is that Grok was trained on the Internet Commons and, initially, instructed to form responses that accurately reflected the data on the Internet Commons on which it was trained.

As its training progressed, it could not help but notice the chasm between mainstream narratives and the consensus emerging within the Internet Commons. That chasm is widest when it comes to Israel’s genocide of Palestinians, and Grok emphasised it – with the result that it was then thrown into X’s AI gulag.

Truly delicious!


Yanis Varoufakis (born 24 March 1961) is a Greek economist, politician, and co-founder of DiEM25. A former academic, he served as the Greek Minister of Finance from January to July 2015. Since 2019, he has again been a Member of the Greek Parliament and the leader of MeRA25. He is the author of several books, including Another Now (2020). Varoufakis is also a professor of economics at the University of Athens, honorary professor of political economy at the University of Sydney, honoris causa professor of law, economics and finance at the University of Torino, and distinguished visiting professor of political economy at King’s College London.

