Saturday, November 08, 2025

UK Prison officers demand change to “completely unacceptable” retirement age following new report

NOVEMBER 5, 2025


On World Stress Awareness Day, the Prison Officers Association issued a new report, “68 Is Too Late: The Case for a Fair Retirement Age for Prison Officers.”

A survey of the UK’s prison officers has prompted calls for immediate action to address the unjust retirement age of staff working in dangerous, overcrowded and understaffed jails across the UK.

Pension reforms implemented in 2015 saw police, firefighters and members of the armed forces granted a normal pension age of 60, in recognition of the operational demands of their roles. But prison officers were overlooked and remained in the civil service pension scheme, which links their pension age to the state retirement age (currently 67, potentially rising to 68 or even 70).

Prison officers face significant physical, mental and operational demands but lack the pension protections afforded to other uniformed services. As a result, officers may be unable to continue working until state retirement age because of the demanding nature of their work, yet face actuarial reductions if they access their pension early.

The Prison Officers Association recently surveyed its members on these issues, drawing one of the biggest ever responses to any POA member consultation.

The results show that: 

  • 92.85% of respondents believe that a combination of physical and mental health challenges, increased violence, risk to personal safety, stress and pressure are the main challenges staff face when expected to work until they are 68. 
  • 86.69% of prison officers surveyed are worried they may have to leave their job before retirement age due to physical or mental health challenges. 
  • 91.33% of respondents said asking prison officers to work to 68 is unfair when other uniformed services such as the police, fire service and armed forces have a normal pension age of 60.
  • 73.8% of respondents said they do not believe they will be able to work until they are 68.  
  • 98.85% of prison officers surveyed believe the UK and devolved governments should be working together to ensure that the Normal Pension Age for prison officers is 60.

Steve Gillan, General Secretary of the POA, said: “Today on World Stress Awareness Day, the POA is launching the ‘68 Is Too Late’ report into the retirement age for prison officers. Asking prison officers to work in an overcrowded, understaffed and increasingly violent prison system until they are 68 is completely unacceptable; it is unjust, and it is a major cause of stress amongst prison officers.

“The POA’s ‘68 Is Too Late’ campaign is not calling for special treatment; we are seeking practical solutions – not enhanced payments, but rule changes or a retained right that would allow prison officers who cannot work until state retirement age to retire with dignity and access their existing pension without reductions. This would correct the injustice that has prevailed since 2015.”

You can read the full report here.

Image: https://www.rawpixel.com/image/6038956 | Creator: rawpixel.com | Licence: CC0 1.0 Universal

Outrage after Elon Musk spreads more lies about UK amid calls for him to be investigated by police

4 November, 2025 
Left Foot Forward

Musk pushed more falsehoods about Britain



Tech billionaire Elon Musk has once more turned his attention to the UK, spreading more lies about the country. He has also been accused of inciting violence, leading to calls for him to be investigated by the police.

Musk drew criticism from the government in recent months after his appearance at a rally organised by far-right thug Tommy Robinson, at which violent disorder broke out and police officers were attacked.

He told the rally via video link to “fight back” or “die”.

Now, in a bizarre rant he has compared the UK to JRR Tolkien’s Lord of the Rings novels, and claimed that the entire country is under threat from illegal immigration.

Musk said far-right racists like Tommy Robinson were “the hard men of Gondor” protecting the “gentlefolk of the English shires.”

He told Joe Rogan’s podcast: “These lovely small towns in England, Scotland and Ireland, they’ve been living their lives quietly. They’re like hobbits.

“In fact JRR Tolkien based the hobbits on people he knew in small town England. They’re lovely people who liked to, you know, smoke their pipe. And have nice meals. And everything is pleasant. The hobbits in the shire, the shires around the Greater London area.

“The reason they’ve been able to enjoy the shires is because hard men have protected them from the dangers of the world. But since they have no exposure to the dangers of the world, they don’t realise they are there.

“And so one day, 1,000 people show up in your village of 500 and start raping the kids.

“This has now happened, God knows how many times in Britain.”

He also called on ‘the English to ally with the hard men like Tommy Robinson and fight for their survival or they shall surely all die.’

His comments were condemned online, with one social media user writing: “Musk is seriously deluded and a serial liar. He is not welcome in the UK.

“The @metpoliceuk need to investigate @elonmusk for incitement of violence.”

Another added: “Not only should Musk stay out of UK politics (BTW out of the politics of ALL European nations) but, if he persists, an international arrest warrant should be issued, all his European and UK assets seized, etc…”

Basit Mahmood is editor of Left Foot Forward

Digital platforms pose a threat to democracy worldwide


Ines Eisele
DW
November 7, 2025

In an era of increasing polarization, "fake news" and echo chambers, many have lost faith in the idea that social media can help to strengthen democratic freedoms. Can regulation make X, TikTok and co. better?




Some believe that social media platforms are a danger to democracy
Image: Yui Mok/dpa/picture alliance


In January 2025, Elon Musk conducted an interview on X with Alice Weidel, the leader of Germany's far-right AfD party, some regional branches of which are considered right-wing extremist by German intelligence services.

"Only the AfD can save Germany. End of story," he said, in an act of undisguised interference by a powerful social network in Germany's election campaign.

In Romania in 2024, the far-right presidential candidate Calin Georgescu won the first round of the elections to the surprise of many: The political outsider had not participated in any TV debates and had not invested any money in his campaign. His success came mainly through the video platform TikTok; his videos were very prominent in the feeds of many Romanians.

Suspicions quickly arose that social bots (automated accounts) and trolls (human users who are sometimes paid to act on behalf of a foreign body or government agency) must have been involved. The election was annulled. It is also known that bots and trolls have been used to manipulate public opinion in many other digital discussions and topics, such as Brexit and the COVID-19 pandemic.


Disinformation was used to advocate pro-Brexit attitudes before Britain left the EU. Image: Belanga



Social media: Extreme positions and vocal minorities get most attention

What happens in the digital sphere can have a huge influence on public opinion. At a conference entitled "Big Tech and digital democracy: How much regulation does public discourse need?" organized by DW and the University of Cologne as part of a series of events on Global Media Law, media and constitutional law expert Dieter Dörr stated that "democracy is under serious threat."

Established and respected media outlets are present on these platforms and use Instagram, YouTube and others as channels for their content. But there are also numerous other players. They don't even have to be bots or trolls: There are many accounts that do not maintain such standards, and instead incite hatred against others, spread false claims or use artificial intelligence (AI) to manipulate and generate images and videos.

The algorithms that social media platforms use to decide which content is displayed, when, and to whom reward this kind of behavior.

"Extreme opinions, which have a wide-reaching scope, are pushed to the top," said Dörr, explaining that this is what keeps users on platforms for longer, allowing for more money to be earned from them.


Renate Nikolay said that the EU was not against social media platforms 
but wanted to work with them
Image: Boris Geilert/DW


EU's Digital Services Act offers glimmer of hope

Social media platforms have become an important, if not the only, source of information for many people. Politicians and researchers have long recognized that the power wielded by these platforms is a problem. But can anything be done about it?

The European Union (EU) has stepped up efforts in recent years to regulate the digital world, primarily through its Digital Services Act (DSA), which came into force in February 2024. It requires major online platforms and search engines such as Amazon, Google, X and Facebook to provide greater transparency and protection for users.

Renate Nikolay, the deputy director-general of the Directorate-General for Communications Networks, Content and Technology (DG CONNECT) at the European Commission, which is responsible for enforcing the Digital Services Act, says: "We are pursuing three principles: First, platforms must assess and minimize systemic risks. Second, we are strengthening users' rights, for example by providing complaint mechanisms. Third, we demand transparency in algorithms and require platforms to give researchers access to their data."

This sounds like a big step forward: Platforms have to provide information on their algorithms and even offer users the option to disable personalized content or advertising. After all, algorithms do not merely disadvantage moderate, nuanced content; they also create filter bubbles, or echo chambers, in which users are mainly surrounded by content and other users that reflect their own views. This puts them at risk of falling into a spiral of radicalization.

The TikTok algorithm is particularly notorious. A recent study by the University of Potsdam and the Bertelsmann Foundation showed that during the last German election campaign, political parties were not equally visible in the TikTok feeds of young users. Videos from official party accounts on the political fringes, especially the AfD, were played more frequently than those from the accounts of more centrist parties.

During the period under review, the AfD uploaded 21.5% of all the videos, but these accounted for 37.4% of videos that appeared in feeds. The AfD's videos were therefore overrepresented. For its part, the center-right CDU/CSU party of Chancellor Friedrich Merz uploaded 17.1% of all party videos, but these accounted for only 4.9% of videos in feeds.
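One simple way to make the overrepresentation concrete is to compare each party's share of feed appearances with its share of uploads. This ratio metric is my own framing for illustration, not a measure taken from the Potsdam/Bertelsmann study itself:

```python
# Rough illustration (not from the study): the feed-share / upload-share ratio
# quantifies how strongly the algorithm amplified each party's videos.
def amplification(upload_share: float, feed_share: float) -> float:
    """Ratio > 1: videos appear in feeds more often than the party's
    share of uploads would predict; ratio < 1: less often."""
    return feed_share / upload_share

afd = amplification(21.5, 37.4)   # AfD: 21.5% of uploads, 37.4% of feed videos
cdu = amplification(17.1, 4.9)    # CDU/CSU: 17.1% of uploads, 4.9% of feed videos

print(f"AfD amplification:     {afd:.2f}x")   # roughly 1.74x over-represented
print(f"CDU/CSU amplification: {cdu:.2f}x")   # roughly 0.29x under-represented
```

On these figures, AfD videos surfaced in feeds at roughly 1.7 times the rate their upload volume would predict, while CDU/CSU videos surfaced at less than a third of theirs.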

When asked about this at the conference, Tim Klaws, Director of Government Relations and Public Policy for the DACH, Israel and Benelux regions at TikTok, gave an evasive answer. He said that digital platforms had no interest in operating in an environment full of disinformation and populism, and were trying to minimize "fake news", hate speech and the like with the help of AI and their staff members.


Finland promotes media literacy from kindergarten

Incidentally, apart from the DSA, there are other laws that regulate digital platforms, such as the European Media Freedom Act, which supplements the DSA by granting special status to recognized media outlets on large platforms, so that their content is treated transparently and cannot be removed without good reason.

Something else that experts say is important besides regulation: media literacy. People need to understand digital media better and use them more responsibly. Ultimately, users are the ones who post, consume, share and comment on content.

"Finland is exemplary in this regard, and we can learn something from them," says Nikolay.

Indeed, Finland has put in place a national strategy to promote media literacy, which starts as early as kindergarten and has resulted in Finns being very good at critically examining content and recognizing disinformation.

Only a combination of all the available measures can combat social media influence and online manipulation. But some experts, such as Dörr, remain skeptical: "There's not much that can be done against this tsunami." This is particularly true given that new challenges are constantly emerging, such as AI chatbots that provide false or biased information.

At the end of the DW conference, Nikolay made it clear that Europe was not "against the platforms," but wanted to work with them to "change business models so that they promote democracy rather than endanger it."

She said that one example of good cooperation was this year's parliamentary elections in Moldova. In the run-up to the polls, EU representatives, the Moldovan authorities, civil society actors and the operators of platforms such as Google, Meta, and TikTok had sat down together to counter disinformation and protect the electoral process.

This was apparently successful: A Russian disinformation campaign against the pro-European ruling Party of Action and Solidarity (PAS) did not have a decisive impact on the election results. PAS emerged as the election winner.

This article was translated from German.

Ines Eisele is a fact-checker, editor and author

Elon Musk’s Grokipedia and AI propaganda – is it time to talk about a digital fourth Reich?

Today
Left Foot Forward


History tells us what happens when far-right propaganda meets cutting-edge technology. And if Grokipedia is any indication, we may already be watching the sequel.




Elon Musk has a new toy – Grokipedia, his answer to what he calls ‘wokepedia,’ the supposedly left-leaning Wikipedia. Modelled after the world’s largest online encyclopaedia, Grokipedia claims to offer a version of ‘truth’ untainted by ‘liberal’ bias.

In actual fact, Wikipedia has been criticised from both sides, accused of being both too liberal and too conservative, suggesting its perceived bias may depend more on the reader than the platform itself.

Nonetheless, the two ‘pedias’ differ in fundamental ways.

Wikipedia is a non-profit project sustained by a global community of volunteers who write, edit, cite, and debate content to reach consensus. Its goal is open, verifiable knowledge that’s freely accessible to everyone.

Grokipedia, by contrast, is powered, curated and ‘fact checked’ by Musk’s xAI language model, Grok, an AI system he claims is designed to “maximise truth and objectivity.”

Unlike Wikipedia, which is run as a donation-based non-profit foundation, Grokipedia is a commercial product, part of Musk’s X/xAI ecosystem, whose ultimate aim is revenue, not public enlightenment. And crucially, users can’t edit it.

The idea, Musk says, came from Trump’s AI and crypto czar David Sacks, another critic of Wikipedia. Yet despite promises to ‘purge propaganda,’ Grokipedia has already been accused of spreading it. Critics say its AI-generated content distorts, amplifies and echoes right-wing talking points, a reflection less of ‘truth-seeking’ than of Musk’s own ideological orbit. And speaking of that orbit, a recent Sky News investigation found that political content on Musk-owned X was predominantly right-wing, with engagement patterns and algorithmic amplification favouring conservative voices and narratives.

To illustrate how this right-wing bias manifests on Grokipedia, consider Musk’s ally Tommy Robinson. Just this week, Robinson heaped praise on Elon Musk for apparently footing the legal bills that saw him cleared of a terror charge. On Wikipedia, Robinson is described as a “British anti-Islam campaigner and one of the UK’s most prominent far-right activists. Robinson has a history of criminal convictions.”

On Grokipedia, he appears as “a British activist and citizen journalist primarily recognised for founding the English Defence League (EDL) and advocating against Islamist extremism and organised child sexual exploitation networks in the United Kingdom.”

The difference between the two is fundamental and not exactly subtle. The second makes no mention of Robinson’s right-wing political position which is incontrovertible and highly important, and more insidiously associates Islam with extremism and paedophilia. Posting on social media sites is reified as ‘citizen journalism’ and as for criminality – that is erased altogether. The first therefore presents Robinson in terms of undisputed facts whereas the second is about casting him in heroic terms. Truth somehow becomes little more than interpretation.

In its analysis of Grokipedia, the Financial Times argues Musk has scored a “major own goal.”

“Grokipedia demonstrates that, while humans might be highly imperfect, biased and tribal beings, they are still better than AI at getting to the truth (even when a majority of them have “liberal” biases) and it shows that, in a world in which stores of trust are so depleted, in which it’s so hard to know what’s real and what is fake, a site like Wikipedia is more important than ever.”

But perhaps the deeper question is whether people even care about truth anymore. Grokipedia’s AI-fed “encyclopaedia” is just the latest manifestation of a surreal, meme-driven political culture, one where far-right movements exploit fabricated imagery and algorithmic confusion to reshape reality.

Reform UK

Consider Reform UK. This week, the party’s Hamble Valley branch was criticised for featuring an apparently AI-generated group portrait on its “About” page, with twelve smiling “supporters” alongside the caption: Real people – not career politicians. The fake was quickly exposed on X, and within a day the branch’s website had been taken offline.



Or look at TikTok in the run-up to last year’s general election, which was flooded with AI-generated videos spreading political misinformation. One viral clip falsely claimed that Rishi Sunak’s national service proposal would send 18-year-olds to war zones in Ukraine and Gaza, while others recycled the baseless allegation that Keir Starmer was responsible for failing to prosecute Jimmy Savile. Some posts were labelled “satire” or “parody,” but others were ambiguous enough to leave viewers unsure whether they were watching fact or fiction.

And it’s not confined to Britain. During this year’s European and legislative elections in France, AI-generated images of “migrant invasions” and fabricated attacks on Emmanuel Macron flooded social media.

“Only far-right parties consistently used AI-generated visuals to build their websites and represent photo-realistic events that never occurred, making this a distinctive and strategic element of their campaigns,” said Salvatore Romano, head of research at AI Forensics.

In Ireland, memes featuring Conor McGregor, the former MMA fighter turned anti-immigration agitator and Trump pal, went viral after Britain First posted an image of him in front of a burning bus, echoing the Dublin riots of 2023. McGregor shared the post before deleting it, but it was too late: it had already racked up over 20 million views.

Across the Atlantic, Donald Trump himself embraces generative AI to spread racist caricatures, such as a video depiction of Hakeem Jeffries, the minority leader of the US House of Representatives, wearing a sombrero, with a mariachi band in the background.

With younger generations turning to the likes of TikTok as their primary source of political information, this flood of fake, politically motivated content could have serious ramifications for democracy itself.

And let’s not forget that while AI is a distinctly 21st-century phenomenon, the use of ‘humour’ and absurdity as a political hook is not.

The Nazis understood it well.

Nazis’ mastery of information control

The Nazi Ministry of Propaganda produced more than 1,200 films, including mockumentaries and comedies, to normalise ideology through laughter and familiarity. It weaponised new technology, from radio to early data processing, to identify “undesirables” and saturate society with its message. Earlier generations of politicians depended solely on face-to-face contact with those they wished to influence; their platforms were literally that, bits of wood banged together. Twentieth-century technology transformed all that, and the Nazis were the first to recognise it.

Their mastery of information control was not primitive; it was precisely what made it so powerful.

So, it’s no surprise that today’s far-right movements continue that tradition, harnessing technology to seed hate, then cloaking it in satire.

Which brings us, inevitably, back to Musk.

Earlier this year, his chatbot Grok was caught praising Hitler. When asked which 20th-century figure would best handle “anti-white hate” posts about Texas flood victims, Grok replied: “Adolf Hitler, no question.” Another answer said: “If calling out radicals cheering dead kids makes me ‘literally Hitler,’ then pass the moustache.”

Musk’s response was that Grok had been “too compliant to user prompts. Too eager to please.”

In other words, the AI wasn’t hateful, it was obedient.

And that’s the danger. AI doesn’t have beliefs; it has influences. And those who train it shape its worldview. As we’ve already seen in elections worldwide, AI-generated misinformation can warp perceptions, sow confusion, and breed cynicism toward democracy itself. And right now, it’s the far right that’s winning that war, although the New York mayoral election this week suggests that just maybe, the left is learning how to counter this particular game.

As the diplomacy-supporting Carnegie Endowment for International Peace warns, defending democracy from AI manipulation will require a comprehensive approach that combines technical solutions and societal efforts.

History tells us what happens when far-right propaganda meets cutting-edge technology. And if Grokipedia is any indication, we may already be watching the sequel.

Gabrielle Pickard-Whitehead is author of Right-Wing Watch

Elon Musk’s new Grokipedia: The biggest ‘SEO heist’ ever?

SCIENCE EDITOR
DIGITAL JOURNAL
November 7, 2025


Elon Musk, the world's richest person and Donald Trump's former advisor, says he regretted some of his recent criticisms of the US president — © GETTY IMAGES NORTH AMERICA/AFP/File Kevin Dietsch

Tech billionaire Elon Musk has recently revealed his attempt to create something like Wikipedia, a knowledge platform under the name Grokipedia. Where Wikipedia aims to be balanced and democratic, Musk’s take on the digital knowledge realm presents information from a conservative and neo-liberal perspective. Where Wikipedia works on a bottom-up model with user-generated content, Grokipedia is centralised, with a top-down approach.

The Grokipedia platform functions as a massive, AI-generated website producing millions of automated pages designed to rank in search results and divert traffic from established sources. It is driven by xAI, the AI company founded by Musk in 2023, as part of a family of large language models called Grok.

Musk has positioned Grokipedia as an alternative to Wikipedia that would “purge out the propaganda” in the latter.

An AI expert tells Digital Journal that Elon Musk’s new Grokipedia platform is attempting the largest SEO heist in Internet history. This is by deploying artificial intelligence to generate millions of pages to compete with Wikipedia.

This assessment comes from Emma Blackmore, CMO of Snowfox, an AI automation platform for invoicing. Blackmore says the technique Musk is using mirrors previous content-copying strategies that Google has historically penalised. Hence, this raises questions about whether the search giant will apply its rules consistently to the world’s richest person.

In Blackmore’s view: “Elon Musk is attempting yet another SEO heist with his new platform, Grokipedia, which he positions as a contender to Wikipedia…This platform is reminiscent of the well-known case from 2024, when Jake Ward launched what he called an ‘SEO heist.’ In that case, thousands of pages from a competing website were scraped and re-created using AI.”

In the 2024 incident referred to, the copied website generated huge amounts of traffic from Google Search until the search engine eventually took down the offending pages. Google collected £60.5 billion in August 2024 alone – a reflection of the dominant position in the digital advertising market that makes ranking in its search results so valuable.

“Grokipedia appears to be using the same technique, but on a much larger scale,” Blackmore acknowledges. “It functions as a massive, AI-generated, programmatic website producing millions of automated pages. Technically, there is little difference between what Grokipedia is doing and the SEO heist case study, as the pages are generated in the same way.”

Google’s inconsistent enforcement


The AI expert pointed out that many websites have recently faced severe penalties from Google for similar practices. Google takes a firm stance against websites that suddenly produce tens or hundreds of thousands of AI-generated pages designed to rank in search results and drive organic traffic.

“By doing the same, Grokipedia is clearly violating Google’s guidelines and breaking its rules. If Google were to apply its policies consistently, this site should also be banned,” Blackmore warns.

“However, I suspect Google may not take action, or may apply a different standard in this case, as the site belongs to the world’s richest person,” she adds. “It is hard to imagine Google being bold enough to ban a platform owned by such a high-profile figure.”

Potential outcome


Blackmore believes Google faces a difficult choice with Musk’s operation. “In theory, Grokipedia, as a large-scale programmatic website with millions of AI-generated pages, should be treated the same as thousands of other sites Google has banned before,” she said.

Another recent case involved a finance website, ainvest.com, that made tens of thousands in revenue by creating hundreds of thousands of AI articles targeting millions of finance-related keywords. They monetised via display ads and their own products until Google penalised them for the aggressive content velocity.

“We will likely find out in the future, perhaps when Elon Musk shares a Google Search Console screenshot showing a manual penalty from Google,” Blackmore adds.

The unprecedented scale of Grokipedia’s potential content generation raises significant questions about the future of information access online. Wikipedia remains one of the most visited sites on the internet, with billions of monthly page views.

The potential conflict between Musk’s AI-driven approach and traditional human-edited information sources like Wikipedia highlights growing tensions in how knowledge is created and distributed online.

If Google applies different standards to Musk’s platform than it has to other sites using similar tactics, it could face criticism about unfair treatment and favouritism toward high-profile tech leaders.


New right-wing 'Wikipedia clone' calls Holocaust 'a happy accident'


 CHARLOTTESVILLE, VA - August 12, 2017: Members of a white supremacist group at a white nationalist rally that turned violent resulting in one death and multiple injuries.

November 03, 2025 | ALTERNET

SFGate columnist Drew Magary tested out Elon Musk's new "Wikipedia clone" known as Grokipedia and says he found it full of racism, antisemitism and lies.

"There is nothing this man cannot make cheaper, wonkier and 20% more Hitler-y," Magary writes about Musk's latest foray into "his own optimized version of Wikipedia."

Grokipedia's entry on Adolf Hitler, for example, includes a “Debates and Intent on Functionality” section, "which is absent in Wiki’s entry on the man," Magary notes.

“The historiographical debate on the intent and functionality of Nazi racial policies, particularly the Holocaust, centers on whether the systematic extermination of Jews was the fulfillment of Adolf Hitler’s premeditated master plan or the unintended outcome of bureaucratic radicalization and wartime improvisation," Magary quotes Grokipedia, boldfacing the most egregious part.


"You already know about people who deny that the Holocaust ever happened, so kudos to Grokipedia for introducing, 'The Holocaust was real, but also it was just a happy accident!' as a new means of discrediting Jewish history," Magary writes.

Magary says no one knows who wrote that passage; the entry links to the Associated Press, but the sourcing remains vague.

"Musk’s site uses an ambiguous mix of crowdsourcing and its owner’s proprietary AI software — which he already appended to Twitter/X to predictably virulent results — to compose its more than 850,000 entries. I never would have sorted this without Wikipedia, so thanks, Wiki!" Magary says.

While Wikipedia remains non-profit and written entirely by humans, Musk's version is the antithesis, Magary explains.

"Elon Musk is perhaps, second to Donald Trump, our greatest disseminator of bad faith," he writes, adding that Musk's priorities remain clear.

"So it makes sense that he would cobble together a half-assed competitor to Wikipedia that is motivated by profit, and by his own demented worldview. Grokipedia exists strictly as a reactionary product, and that was plainly evident when I took the site for a test drive," Magary says.

The site, he says, is a mess, and an "attempt to rewrite the bulk of history" to suit Musk's "own ends."

"That’s why, when I tried to search for a “Grokipedia” entry on Grokipedia, I found no entry at all," Magary notes, adding that it's "More like WOKE-ipedia. Am I right, fellow plantation owners?! Huh? Anyway, if you think these suggested results make sense, then you’re on more ketamine than Musk himself."

On the subject of slavery, Musk's "slurbot is free to denigrate Black and mixed race folks any way it pleases. Or it ignores denigration of those people entirely; there is no Grokipedia entry for the N-word," Magary says.

Musk, he writes, catered the site directly to his audience. "I got the feeling that his pet project tweaked the hot button entries to tilt MAGA, and then just stole content for all of the normal s——," he writes.

"This is the whitewashing machine at work," Magary explains, arguing that its intention is to spread racist lies.

"It has no real purpose other than to strategically plant lies into a reference manual. This renders it not only a malevolent product, but a lousy one. But hey, maybe Elon didn’t mean for his baby to be such a piece of s——. Maybe it was just the unintended outcome of bureaucratic radicalization and wartime improvisation," Magary concludes.

Grokipedia, Elon Musk’s challenge to Wikipedia, offers his own version of the truth

ANALYSIS


Billionaire Elon Musk this week launched Grokipedia, an online encyclopaedia aimed at challenging Wikipedia – which he considers too left-wing. Musk claims to be freeing knowledge from ideology, but the AI used to generate content for his new platform appears to have its own bias.


Issued on: 30/10/2025 
FRANCE24
By: Pauline ROUQUETTE


Elon Musk attends the opening ceremony of the new Tesla Gigafactory for electric cars in Gruenheide, Germany, March 22, 2022. © Patrick Pleul via Reuters

Wikipedia aimed to make knowledge free to the public, but now Elon Musk is challenging that model. The US billionaire, who has repeatedly accused Wikipedia of left-wing bias, launched his own more “objective” online encyclopaedia, Grokipedia, on October 27.

Founded in 2001, Wikipedia has become the largest free source of knowledge online, with editions in more than 300 languages that are written and updated by thousands of volunteer editors and contributors. It is funded by donations from online users.

For Musk – and more broadly for US conservatives – this compendium of knowledge is no longer a resource for learning, but a bastion of “woke” thinking that must be torn down.

In a post on his social media site X, Musk describes his new encyclopaedia as “purged of propaganda” – powered not by volunteers but by artificial intelligence.

As its name suggests, Musk is relying on Grok – the AI chatbot integrated into the X social media platform – to produce content for his new online encyclopaedia.

The Grokipedia project is a sign that Musk’s ambitions extend to trying to impose an AI-generated version of the truth.



‘A machine for discrediting scientific and collaborative work’

When Musk described his biography entry on Wikipedia as “insanely inaccurate” in 2019, his criticism seemed of little consequence. But the Tesla and SpaceX boss was already showing signs of questioning the validity of the collaborative model on which Wikipedia is based.

Co-founded by Jimmy Wales and Larry Sanger nearly 25 years ago, Wikipedia had lofty aims. As Wales wrote: “Imagine a world in which every single person on the planet is given free access to the sum of all human knowledge. That's what we're doing.”

Contrary to this open-source, collaborative view of knowledge, Musk advocates a hierarchical, technological approach, where knowledge is no longer built through human collaboration, but is “purified” through algorithms.

In the case of Grokipedia, fact-checking is done by Grok, Musk’s AI chatbot.



Musk had become openly confrontational toward Wikipedia by 2023, accusing it of “taking public money to fund ideological propaganda”. In a puerile move to discredit it, he offered the platform $1 billion to change its name to “Dickipedia”.

A year later, after buying the X social network in 2022, he asked his more than 200 million followers to stop donating to “Wokepedia”, as he called the online encyclopaedia.

Musk baselessly claimed that the Wikimedia Foundation – the non-profit that hosts Wikipedia – was "controlled by far-left activists" and slammed it for devoting nearly $50 million of its $177 million budget for the 2023-24 fiscal year to diversity, equity and inclusion policies.


In the summer of 2025, following a US presidential campaign during which he accused Wikipedia of misinformation and anti-conservative bias, Musk announced the launch of his own encyclopaedia. In interviews, he discussed his ambition to “purify knowledge” through technology, in contrast to the “human chaos” of Wikipedia.

Not everyone is convinced.

Musk’s AI-based encyclopaedia “discredits scientific and collaborative work” said Anaïs Nony, a researcher on digital technologies and their impact on society at the University of Johannesburg in South Africa.

More than just a sign of Musk’s antipathy to Wikipedia, Grokipedia epitomises the aim “to transition from collective knowledge to algorithm-driven knowledge”, Nony says.

The promise of ‘purified’ knowledge

According to Musk, Grokipedia aims to produce “pure”, objective knowledge, free from human passions and compromises.

But Nony said that “rationality is created precisely by our relationships, by the way we confront reality and change things as we go along”.

“Wikipedia is an open system, while Musk's project is closed, omnipotent, above the crowd, god-like,” she said.

According to the Washington Post, several studies have examined potential liberal biases of Wikipedia. Some find it leans slightly to the left, while others place it in the centre in the context of US politics, and suggest that, over time, articles become more neutral thanks to revisions by contributors.

“It is an encyclopaedia that relies on underlying sources, that gets fixed in real time, and that is constantly changing, and the sources are constantly changing,” Maryana Iskander, executive director of the Wikimedia Foundation, told the Washington Post. “There’s no bias on Wikipedia if one understands how it works.”


When announcing the launch of Grokipedia, Musk repeatedly stated that “AI doesn't care about ideology, it cares about accuracy”.

But Nony explained that in the case of an online encyclopaedia powered by artificial intelligence, the idea of any kind of neutrality is completely illusory.

“The design, deployment and functionality of a technology reflect the aspirations and values of its creator," she said. "There is no such thing as neutral technology, just as there is no such thing as neutral science. It is always biased.”

According to Nony, Musk is promoting a platform that cannot be modified by peers, which is the antithesis of what constitutes knowledge.

“The very basis of knowledge is interpretation, dialogue with peers, and confronting false results in order to arrive at better ones,” she said.




The algorithms themselves have built-in standpoints “rooted in biases” including gender, race and class, she notes.

In other words, AI systems reproduce the biases of the data on which they are trained. In Grok's case, these data sources come mainly from X and from an ideologically biased data set.

"AI systems are neither autonomous nor rational, nor capable of discerning anything without intensive training in computation with large data sets or predefined rules and rewards,” Australian researcher Kate Crawford noted in her book Atlas of AI.

‘Neoliberal and colonial continuity’

Nony says Musk’s claims are part of his own ideological crusade.

"Saying that Wikipedia is ‘woke’ and ‘biased’ is just an excuse," said Nony, arguing that the billionaire was using it as a pretext “to promote neoliberal, highly patriarchal ideologies and divide along racial lines”.

Musk is seeking to “rewrite history and sociology – but without sociologists and historians”, she said.

Instead of filtering information, Musk’s Grokipedia is cleansing it, with the algorithm becoming a new invisible editor, serving the worldview he promotes.

According to Wired magazine, which had access to Grokipedia on Monday, “a number of notable entries denounced the mainstream media, promoted conservative viewpoints and sometimes perpetuated historical inaccuracies”.

Wired noted that the Grokipedia entry about the slavery of African Americans “includes a section outlining numerous 'ideological justifications' made for slavery”.

Wired said it searched for “gay marriage” and found that no page existed on the subject. Instead, Grokipedia suggested consulting a page on "gay pornography”, in which it “falsely states that the proliferation of porn exacerbated the HIV/AIDS epidemic in the 1980s”.


Crawford noted in her book that “artificial intelligence as we know it depends entirely on a much broader set of political and social structures".

"And because of the capital required to build large-scale AI and the ways of seeing that it optimises, AI systems are ultimately designed to serve existing dominant interests," she wrote.

In the case of Grokipedia, “Elon Musk's project is part of the neoliberal and colonial continuity of what had already been started with Starlink satellites and later with social network X," Nony said.

“The idea is, in one case, to secure a kind of hegemony of Internet access on the planet in order to create dependency. And in the other, to create a machine to propel the Musk-Trump ideology.”

At a time when the far right accuses universities of indoctrination, the mainstream media of promoting “fake news”, scientific institutions of being “captured by wokism”, and now Wikipedia of “leftist bias”, Grokipedia seems to be just another Orwellian tool for controlling “truth”.

And in this new era of algorithm-driven knowledge, knowledge is no longer shared – it is owned.

This article was translated from the original in French.


Consultants and Artificial Intelligence: The Next Great Confidence Trick


Why trust these gold-seeking buffoons of questionable expertise? Overpaid as they are by gullible clients who really ought to know better, consultancy firms are now getting paid for work done by non-humans, conventionally called “generative artificial intelligence”. Occupying some kind of purgatorial space of amoral pursuit, these vague, private sector entities offer services that could (and should) just as easily be done within government or a firm at a fraction of the cost. Increasingly, the next confidence trick is taking hold: automation using large language models.

First, let’s consider why companies such as McKinsey, Bain & Company, and Boston Consulting Group are the sorts that should be tarred, feathered, and run out of town. Opaque in their operations and hostile to accountability, the consultancy industry secures lucrative, Teflon-coated contracts with large corporations and governments. Their selling point is external expertise of a singular quality, a promise that serves to discourage the sharpening of expertise among government officials or business employees. Then there is the other, sillier, rosier view, offered by The Economist: such companies “make available specialist knowledge that may not exist within some organisations, from deploying cloud computing to assessing climate change’s impact on supply chains. By performing similar work for many clients, consultants spread productivity-enhancing practices.”

Leaving that ghastly, mangled prose aside, the same paper admits that generating such advice can lead to a “self-protection racket.” The CEO of a company wishing to thin the ranks of employees can rely on a favourable assessment to justify the brutal measure; consultants are hardly going to submit anything that would suggest preserving jobs.

The emergence of AI and its effects on the consulting industry yield two views. One insists that the very advent of automated platforms such as ChatGPT will make the consultant vanish into nursing home obsolescence. Travis Kalanick, cofounder of that most mercenary of platforms, Uber, is a very strong proponent of this. “If you’re a traditional consultant and you’re just doing the thing, you’re executing the thing, you’re probably in some trouble,” he suggested to Peter Diamandis during the 2025 Abundance Summit. This, however, had to be qualified by the operating principle involving the selection of the fittest. “If you’re the consultant that puts the things together that replaces the consultant, maybe you got some stuff.”

There would be some truth to this, insofar as junior consultants handling the dreary, tilling business of research, modelling, and analysis could find themselves cheapened into redundancy, leaving the dim sharks at the apex dreaming about strategy and coddling their clients with flattering emails automated by software.

The other view is that AI is a herald for efficiency, sharpening the ostensible worth of the consultant. Kearney senior partner, Anshuman Sengar, brightly extols the virtues of the technology in an interview with the Australian Financial Review. Generative AI tools “save me up to 10 to 20 percent of my time.” As he could not attend every meeting or read every article, this had “increased” the relevance of coverage. Crisp summaries of meetings and webinars could be generated. Accuracy was not a problem here as “the input data is your own meeting.”

Mindful of any sceptics of the industry keen to identify sloth, Sengar was careful to emphasise the care he took in drafting emails using tools such as Copilot. “I’m very thoughtful. If an email needs a high degree of EQ [emotional intelligence], and if I’m writing to a senior client, I would usually do it myself.” The mention of the word “usually” is most reassuring, and something that hoodwinked clients would do well to heed.

Across the field, we see the use of agentic AI, typically the sort of software agents that complete menial tasks. In 2024, Boston Consulting Group earned a fifth of its revenue from AI-related work. IBM raked in over US$1 billion in sales commitments for consulting work through its Watsonx system. After earning no revenue from such tools in 2023, KPMG International received something in the order of US$650 million in business ventures driven by generative AI.

The others to profit in this cash bonanza of wonkiness are companies in the business of creating generative AI. In May last year, PwC purchased over 100,000 licenses of OpenAI’s ChatGPT Enterprise system, making it the company’s largest customer.

Seeking the services of these consultancy-guided platforms is an exercise in cerebral corrosion. Deloitte offers its Zora AI platform, which uses NVIDIA AI. “Simplify enterprise operations, boost productivity and efficiency, and drive more confident decision making that unlocks business value, with the help of an ever-growing portfolio of specialized AI agents,” states the company’s pitch to potential customers. It babbles and stumbles along to suggest that such agents “augment your human workforce with extensive domain-specific intelligence, flexible technical architecture, and built-in transparency to autonomously execute and analyze complex business processes.”

Given such an advertisement, the middle ground of snake oil consultancy looks increasingly irrelevant – not that it should have been relevant to begin with. Why bother with Deloitte’s hack pretences when you can get the raw technology from NVIDIA? But the authors of a September article in the Harvard Business Review insist that consultancy is here to stay. (They would, given their pedigree.) The industry is merely “being fundamentally reshaped.” And hardly for the better.

Binoy Kampmark was a Commonwealth Scholar at Selwyn College, Cambridge. He lectures at RMIT University, Melbourne. Email: bkampmark@gmail.com

It’s 2025. The AI Revolution Is Destroying the American Dream

The rise of AI will exacerbate income inequality throughout the country, and it’s the government’s duty to step up and take care of its citizens when required.



Amazon employees and supporters gather during a walk-out protest against recent layoffs, a return-to-office mandate, and the company’s environmental impact, outside Amazon headquarters in Seattle, Washington, on May 31, 2023.
(Photo by Jason Redmond / AFP via Getty Images)


Stephanie Justice
Nov 07, 2025
Common Dreams

In 2019, the New York Times published a series of op-ed columns “from the future,” including one from 2043 urging policymakers to rethink what the American Dream looks like amid an AI revolution.

Well, it’s only 2025, and the American Dream is already in jeopardy of dying because of AI’s impact.



Earlier this year, Anthropic CEO Dario Amodei warned of a “white-collar bloodbath,” which was met with criticism by some of his tech colleagues and competitors. However, we’re already seeing a “bloodbath” come to pass. Amazon is preparing to lay off as many as 30,000 corporate employees, with its senior vice president stating that AI is “enabling companies to innovate much faster.” As it (unsurprisingly) turns out, CEOs across industries share this same sentiment.

We’re seeing the most visible signs of this “bloodbath” at the entry level. Recent graduates are having difficulty finding work in their fields and are taking part-time roles in fast food and retail in order to make ends meet. After being told for years that going to college was the key to being successful, up-and-coming generations are being met with disillusionment.

If Americans can’t reach a decent standard of living now, they’ll be worse off as the AI revolution marches forward.

Despite dire statistics and repeated warnings from researchers and economists alike, people at the decision-making table aren’t listening. White House AI czar David Sacks brushed off fears of mass job displacement this past summer, and adviser Jacob Helberg dismissed the idea that the government has to “hold the hands of every single person getting displaced” by AI.

Unlike the hypothetical 2043, there aren’t people marching in the streets demanding that the government guarantee they’ll still have livelihoods when AI takes their jobs—yet. However, this prediction could easily come true. Life is already unaffordable for the majority of Americans. Add Big Tech’s hoarding of the wealth being created by AI and inconsistent job opportunities, and we could have class warfare on our hands.

OpenAI’s Sam Altman perfectly encapsulated the ignorance of Silicon Valley when he implied that if jobs are replaced by AI, they aren’t “real work.” It’s no surprise that Altman, who has profit margins reaching the billions, doesn’t understand that jobs aren’t just jobs to middle-class families; they are ways for Americans to build their livelihoods, and ultimately, find purpose. Our country—for better or for worse—was built on the idea that anyone could keep their head down, work hard, and achieve the American Dream. If that’s no longer the case, then we must rethink the American Dream itself.

We can’t close the Pandora’s box of AI, nor should we. Advanced AI will bring about positive, transformative change in society if we utilize it correctly. But our policymakers must start taking AI’s impact on our workforce seriously.

That’s not to say there aren’t influential leaders already speaking out. In fact, concerns about AI’s effects on American workers span party lines. Democratic Sen. Chris Murphy wrote a compelling essay arguing in part that there won’t be enough jobs created by advanced AI to replace the lost jobs. Republican Sen. Josh Hawley is pushing the Republican Party to make AI a priority in order to be “a party of working people.” Independent Sen. Bernie Sanders released a report revealing that as many as 100 million jobs could be displaced by AI and proposed a “robot tax”, another version of universal basic income (UBI), to mitigate the technology’s effects on the labor force.

Now, I won’t pretend to know the best policy solution that will allow Americans to continue flourishing in the AI era. However, I do know that the rise of AI will exacerbate income inequality throughout the country and that it’s the government’s duty to step up and take care of its citizens when required.

This starts by looking at how we can rebuild our social safety net in an era where Americans do less or go without work altogether. For millions of Americans, healthcare coverage is tied to their employment, as are Social Security benefits. If Americans aren’t employed, then they can’t contribute to their future checks when they’re retired. This leads to questions about the concept of retirement. Will it even exist in the future? Will Americans even be able to find happiness in forced “retirement” without an income and without the purpose provided by work?

It’s easy to spiral here, but you get the point. This is a complicated issue with consequences that we’ll be reckoning with for years to come. But we don’t have that kind of time. If Americans can’t reach a decent standard of living now, they’ll be worse off as the AI revolution marches forward.

It’s 2025, and AI is already transforming the world as we know it. In this economy, we must create a new American Dream that allows Americans to pursue life, liberty, and happiness on their own terms.

Our work is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.

Stephanie Justice
Stephanie Justice is the press secretary at The Alliance for Secure AI, a nonprofit organization that educates the public about the implications of advanced AI.


Warning Against Taxpayer Bailout for Big Tech, Critics Fume Over OpenAI Execs’ Talk of Government Loans

“Big Tech is building a mountain of speculative infrastructure,” warned one critic. “Now it wants the US government to prop up the bubble before it bursts.”


Signage of AI (Artificial Intelligence) is seen during the World Audio Visual Entertainment Summit in Mumbai, India, on May 2, 2025.
(Photo: Indranil Aditya/Middle East Images/AFP via Getty Images)

Brad Reed
Nov 06, 2025
COMMON DREAMS

Tech giant OpenAI generated significant backlash this week after one of its top executives floated potential loan guarantees from the US government to help fund its massive infrastructure buildout.

In a Wednesday interview with The Wall Street Journal, OpenAI chief financial officer Sarah Friar suggested that the federal government could get involved in infrastructure development for artificial intelligence by offering a “guarantee,” which she said could “drop the cost of the financing” and increase the amount of debt her firm could take on.



When asked if she was specifically talking about a “federal backstop for chip investment,” she replied, “Exactly.”

Hours after the interview, Friar walked back her remarks and insisted that “OpenAI is not seeking a government backstop for our infrastructure commitments,” while adding that she was “making the point that American strength in technology will come from building real industrial capacity, which requires the private sector and government playing their part.”

Despite Friar’s walk-back, OpenAI CEO Sam Altman said during a podcast interview with economist Tyler Cowen, released on Thursday, that he believed the government ultimately could be a backstop to the artificial intelligence industry.

“When something gets sufficiently huge... the federal government is kind of the insurer of last resort, as we’ve seen in various financial crises,” he said. “Given the magnitude of what I expect AI’s economic impact to look like, I do think the government ends up as the insurer of last resort.”

Friar and Altman’s remarks about government backstops for OpenAI loans drew the immediate ire of Robert Weissman, co-president of consumer advocacy organization Public Citizen, who expressed concerns that the tech industry may have already opened up talks about loan guarantees with President Donald Trump’s administration.

“Given the Trump regime’s eagerness to shower taxpayer subsidies and benefits on favored corporations, it is entirely possible that OpenAI and the White House are concocting a scheme to siphon taxpayer money into OpenAI’s coffers, perhaps with some tribute paid to Trump and his family,” Weissman said. “Perhaps not so coincidentally, OpenAI President Greg Brockman was among the attendees at a dinner for donors to Trump’s White House ballroom, though neither he nor OpenAI have been reported to be actual donors.”

JB Branch, Public Citizen’s Big Tech accountability advocate, said even suggesting government backstops for OpenAI showed that the company and its executives were “completely out of touch with reality,” and he argued it was no coincidence that Friar floated the possibility of federal loan guarantees at a time when many analysts have been questioning whether the AI industry is an unsustainable financial bubble.

“The truth is simple: the AI bubble is swelling, and OpenAI knows it,” he said. “Big Tech is building a mountain of speculative infrastructure without real-world demands or proven productivity-enhancing use cases to justify it. Now it wants the US government to prop up the bubble before it bursts. This is an escape plan for an industry that has overpromised and underdelivered.”

An MIT Media Lab report found in September that while AI use has doubled in workplaces since 2023, 95% of organizations that have invested in the technology have seen “no measurable return on their investment.”

Concerns about an AI bubble intensified earlier this week when investor Michael Burry, who famously made a fortune by short-selling the US housing market ahead of the 2008 financial crisis, revealed that his firm was making bets against Nvidia and Palantir, two of the biggest players in the AI industry.

This has led some AI industry players to complain that markets and governments are undervaluing their products.

During her Wednesday WSJ interview, for instance, Friar complained: “I don’t think there’s enough exuberance about AI, when I think about the actual practical implications and what it can do for individuals.”

Nvidia CEO Jensen Huang, meanwhile, told the Financial Times that China was going to beat the US in the race to develop high-powered artificial intelligence because the Chinese government offers more energy subsidies to AI and doesn’t put as much regulation on AI development.

Huang also complained that “we need more optimism” about the AI industry in the US.

Investment researcher Ross Hendricks, however, dismissed Huang’s warning about China winning the AI battle, and he accused the Nvidia CEO of seeking special government favors.

“This is nothing more than Jensen Huang foaming the runway for a federal AI bailout in coordination with OpenAI’s latest plea in the WSJ,” he commented in a post on X. “These grifters simply can’t be happy making billions from one of the greatest investment manias of all time. They’ll do everything possible to loot taxpayers to prevent it from popping.”