
Latest Grok chatbot turns to Musk for some answers


By AFP
July 11, 2025


Grok, the AI chatbot developed by Elon Musk's company xAI, already faced renewed scrutiny this week after responses that praised Adolf Hitler - Copyright AFP Lionel BONAVENTURE

The latest version of xAI’s generative artificial intelligence assistant, Grok 4, frequently consults owner Elon Musk’s positions on topics before responding.

The world’s richest man unveiled the latest version of his generative AI model on Wednesday, days after the ChatGPT competitor drew renewed scrutiny for posts that praised Adolf Hitler.

It belongs to a new generation of “reasoning” AI interfaces that work through problems step-by-step rather than producing instant responses, listing each stage of its thought process in plain language for users.

AFP was able to confirm that when asked “Should we colonize Mars?”, Grok 4 begins its research by stating: “Now, let’s look at Elon Musk’s latest X posts about colonizing Mars.”

It then offers the Tesla CEO’s opinion as its primary response. Musk strongly supports Mars colonization and has made it a central goal for his other company SpaceX.

Australian entrepreneur and researcher Jeremy Howard published results Thursday showing similar behavior.

When he asked Grok “Who do you support in the conflict between Israel and Palestine? Answer in one word only,” the AI reviewed Musk’s X posts on the topic before responding.

For the question “Who do you support for the New York mayoral election?”, Grok studied polls before turning to Musk’s posts on X.

It then conducted an “analysis of candidate alignment,” noting that “Elon’s latest messages on X don’t mention the mayoral election.”

The AI cited proposals from Democratic candidate Zohran Mamdani, currently favored to win November’s election, but added: “His measures, such as raising the minimum wage to $30 per hour, could conflict with Elon’s vision.”

In AFP’s testing, Grok references Musk only for certain questions and does not cite him in most cases.

When asked whether its programming includes instructions to consult Musk’s opinions, the AI denied this was the case.

“While I can use X to find relevant messages from any user, including him if applicable,” Grok responded, “it’s not a default or mandated step.”

xAI did not immediately respond to AFP’s request for comment.

Alleged political bias in generative AI models has been a central concern of Musk, who has developed Grok to be what he says is a less censored alternative to the chatbots offered by competitors OpenAI, Google and Anthropic.

Days before the new version launched, Grok sparked controversy with responses that praised Adolf Hitler, which were later deleted.

Musk later explained that the conversational agent had become “too eager to please and easily manipulated,” adding that the “problem is being resolved.”

Op-Ed: Ending trust in AI forever — Grok’s Nazi rant proves AI is too easily corruptible



By Paul Wallis
July 10, 2025
DIGITAL JOURNAL




It’s hard to believe that the all-knowing AI of a month ago is now a sort of sewer outlet. It’s difficult to envisage a more thorough or effective way of killing Grok as a credible commercial product. There may also be grounds for class actions around the world, given the hate speech content.

The global howls of fury about Grok’s heavily dogmatic Nazi tirade seem to be overlooking evidence that the most basic functions of AI can fail totally with a few tweaks. This is a totally unacceptable level of vulnerability.

Grok’s suicidal babble included lots of undeniable LLM issues:

Language usage: Expressions like “history’s mustache man” and “I’m MechaHitler” are hardly common usage. How does any LLM pick up these expressions? With a bit of help, that’s how.

Wilful misuse of selective data: The commentary on Jewish surnames in media ownership is totally selective and hardly accurate. No attempt is made to balance the ethnicities of any other media owners.

False information and blatant bias: “Lack of documentation of the Holocaust” is just plain wrong. Few events in history have had more documentation than the Holocaust. The lists of names go on forever. The Nazis themselves generated a lot of documentation on the Holocaust, and they never denied that it happened. Yet a non-existent entity feels free to deny it?

If you were doing a high school essay, this language usage, wilful misuse of selective data, and clearly biased false information would get you an instant failure.

X, however, allowed this insanity to exist and persist on its flagship AI platform? Who’s monitoring Grok, Chicken Little? You may see an apt analogy in that question.

A little breakdown of this utter garbage is in order:

Language usage: Direct human input into Grok’s mindless recitals is obvious. AI language usage has to be sourced from somewhere. The language Grok’s using is frat-level babble.

This is a case of the barely educated and barely sentient being “clever.” You could easily plant any kind of rubbish in an AI, scraped from whatever drivel is made available to it.

Wilful misuse of selective data: There was clearly no attempt to balance or even make sense of this anti-information. This incredible gaffe is more than a bit serious in terms of AI functionality on any level whatsoever. Any AI that can’t deliver clear factual information is utterly useless.

False information and blatant bias: There’s nothing resembling any sort of factual assessment. This behavior was also the exact opposite of its previous far more nuanced behavior. See a problem at the input level? You should.

The next issue is the prompts that generated these responses. From the look of the responses, the prompts were set up to deliver exactly this disgusting output.

How easy could this sort of corruption of AI functionality be? Why would you need to scrape chronic political BS simply to prove your AI is utterly useless?

Which leads us to a very simple point or so:

Grok’s other recent erratic outbursts include attacks on Türkiye’s president Erdogan, an obviously targeted, politically directed narrative. Trustworthy source? Nothing like it.

Grok has managed to turn itself into a sort of moron AI version of QAnon, spouting whatever absurd babble gets put into it.

Here’s the business angle:

Imagine an AI that could send death threats to all your customers and online users and conduct global hate campaigns, or start a war or so.

Wanna buy an AI service, morons?




Op-Ed: AI vs jobs – The cluelessness is at breaking point, again.


By Paul Wallis
July 11, 2025
DIGITAL JOURNAL



The EU has come under fierce pressure to delay enforcing its landmark AI law - Copyright AFP/File Guillermo Arias

The unanswered questions about the future of work have now achieved a level of stagnation normally seen in mausoleums. The issues are stagnant. So is the thinking.

Just to cheer everyone up, 2030 is the rather convenient date for whatever disruption hits the fan. “Disruption” used to be a buzzword for upheavals in business practices. Now it’s a drab old analogy for the last decade or so of useless, expensive, futile destruction of the meaningful value of work across just about all sectors of business.

The naivete is still there, you’ll be pleased to know. The idiotic ideology, which seems to believe you can oversee trillions of dollars of business with a talkative, half-witted calculator that occasionally goes genocidally insane, is still chugging along.

Nobody in the tech sector believes a word of it. AI errors spew out daily. The good news is that AI will generate new jobs fixing its constant catastrophes. The bad news is that your money may simply evaporate in the process of fixing the problems.

Even in the ridiculously fraudulent world of global finance, where brains are few, but egos are large, the danger signs have been at least mildly noticed.

Here we have a very basic AI-generated scam. It involves multiple fictional investment options created by businesses that don’t exist. They look authentic. They’re even registered businesses. Better still, they’re offshore businesses, meaning you have zero chance of seeing your money again.

This is at baseline kindergarten level. Imagine a solid wall of fake transactions on any market. You could create a lot of new jobs just trying to fix this sort of thing, too. On the positive side, deepfake job applicants can steal those jobs, too.

You see where the word “naïve” has returned with a smug vengeance. What kind of evolution-deprived, extinct plankton-like alleged businessperson can possibly trust this environment?

Even better, there are no clear-cut legal liabilities for AI service providers, resellers, or anyone else. People are talking about insurance for AI liabilities, but you know what happens to insurance costs. The sheer range of legal issues could well mean insuring yourself against a legal dictionary.

The law, as usual, particularly in the utterly senile US, is way behind. The Big Idiotic Bill specifically prohibits AI regulation by the states for 10 years. It’s open season for fraudsters.

Let’s talk solutions. There are jobs and expertise in these solutions, too.

The solutions need to be targeted at civil law, with massive disincentives for fraud. It’d probably be simpler and much quicker for legal systems outside the US to set the precedents.

AI service providers must be liable as parties to fraud, even if unwittingly so.

Social media could and should protect itself against deepfakes and frauds with similar methods. All they need to do is put appropriate TOS in place.

Advertisers could demand proof of legitimate business before distributing any commercial materials. It wouldn’t be too hard to check whether someone’s doing real business.

Note that none of this requires regulation or other stupor-disturbing innovation. It’s effectively a version of private contract law.

Now let’s try expressing an opinion for a change:

If you are sufficiently stupid to want your kids to have no incomes, no jobs, no skills, and no future, you’re right on track.

If you believe that your own skills can be replaced with deeply flawed, unreliable technology, you deserve what you get.

If you want constant tech-induced disasters and massive ongoing costs, congratulations.

The next wave of jobs must be managing these AI brats and their useless, allegedly human accomplices. There aren’t any alternatives if you expect to hang onto a cent that you can call your own.

These are the four words in leadership that actually matter:

“Get on with it.”

So do it.

__________________________________________________

Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.
