Tech needn’t be a gift to demagogues or an enabler of chaos. In fact, it could be the key to raising living standards and improving public services.

Sasjkia Otto is a Senior Researcher for the Fabian Society.
The recent implementation of elements of the pioneering Online Safety Act should have been a moment for celebration. Under duties that came into force at the end of July, tech companies must protect children from “toxic algorithms” and have to explain how they will do so. These new rules build on earlier provisions introduced in March, which require companies to remove illegal content.
But this milestone has been overshadowed by the backlash, which has been egged on by far-right politicians. Reform UK leader Nigel Farage claimed that “state suppression of genuine free speech may be upon us” and vowed to roll back the legislation if his party comes to power.
This attack shows that Farage understands how closely Reform’s prospects are tied to tech policy.
The Act on its own may not have been enough to prevent last summer’s riots, when several people stirred racial hatred through online misinformation about stabbings at a girls’ dance class in Southport. But it marks an important step in an iterative process to bring stability and accountability to an increasingly chaotic online world – which, left unchecked, has fuelled far-right conspiracy theories of decline and spilled offline with alarming frequency.
Farage’s comments were echoed by tech billionaire and self-proclaimed ‘free speech absolutist’ Elon Musk, who is a vocal critic of the Act and has previously speculated about donating £80 million to Reform, which is £20 million more than Labour’s total donations in 2024.
But tech needn’t be a gift to demagogues or an enabler of chaos. In fact, it could be the key to raising living standards and improving public services. And in doing so, it would help tackle the root causes of the insecurity, inequality and alienation that fuels the far right. Recognising this, the June 2025 spending review commits billions to “mainlining” AI into the “veins of the nation”.
For tech to have this positive impact, the government must succeed in the dual challenge of delivering benefits people can feel in this parliament, while protecting them from instability and malicious actors – both in the UK and abroad. Getting one right but not the other could multiply opportunities for extremists to increase their impact on the UK.
To do both, the government must channel its efforts towards the following four things.
First, build ambitiously. The government must push ahead relentlessly with its digital transformation plans. Dithering could invite harsh judgments of the government’s performance and diminish the UK’s power to shape global tech norms in line with its values. The government has already defied expectations by rolling out the Gov.uk app in six months, after being told it was a three-year undertaking. Building on this success, it must resist calls to curb its ambitions to deliver at pace. Sometimes this will entail more investment, but the government can also be smarter about making the most of public and private resources.
Second, distribute fairly. Tech could help raise living standards, but this is not a given. Access to technology and the skills to use it productively remain unequal. And there is no guarantee that more productive businesses will grow or pass the benefits on to workers and communities. If inequality continues to rise, those most affected may be easily drawn to extremist narratives that exploit their vulnerabilities and sense of abandonment. The state must play an active role to ensure that planned AI growth zones benefit local communities, and that people across the country feel better off as the economy changes.
Third, strengthen agency. With an increasingly interconnected global tech ecosystem, democratic control over domestic policy – from investment to taxation – is at risk. And if the right safeguards aren’t in place from the outset, extreme actors could have several new tools to harm people here if they take over tech companies or national governments. The government must develop a strategy and innovate to strengthen the UK’s digital sovereignty and protect people from both state and corporate overreach.
Fourth, engage effectively. History has shown that misunderstandings and loss of trust can result in costly delays. The UK’s first attempt to digitise NHS records started a quarter of a century ago and cost £10 billion. It ultimately failed because poor engagement resulted in poor choices and resistance from both patients and physicians. The public could be unforgiving if the government fails to deliver urgent improvements and misdirects limited public funds in the process – particularly given that support for the government’s AI and innovation agenda is mixed. Recent YouGov polling found that only 42 per cent thought it was the right priority for government investment, while 39 per cent disagreed. To build support for its agenda, the government must get better at enlisting the public and workers as key partners in digital transformation, and at communicating change in ways that people from different walks of life understand and support.
Some commentators have suggested that the government must choose between building the technologies of the future and protecting people. But these false trade-offs must be resisted. To stem the rise of the far right, the government must recognise the full range of opportunities and threats, and use every tool at its disposal to create a country where everybody can thrive. The stakes have never been higher.
'X acted like petrol on the fire of racist violence in the aftermath of the Southport tragedy'

Elon Musk’s X played a key role in spreading harmful content that fuelled last summer’s far-right riots, a new report has found.
The riots started in Southport, after three young girls were tragically killed at a Taylor Swift-themed dance class, and misinformation spread on social media. Riots subsequently took place in other cities across the UK including London, Manchester, Liverpool and Blackpool.
A damning analysis by Amnesty International found that X’s algorithm boosts hateful and harmful content, allowing it to spread at speed.
It found that X’s content-ranking algorithm, which drives the “For You” page, prioritises content that sparks outrage and contentious engagement, “without adequate safeguards to prevent or mitigate harm”.
In the aftermath of the Southport stabbings on 29 July 2024, an account on X called “Europe Invasion”, known to publish anti-immigrant and Islamophobic content, posted that the suspect was “alleged to be a Muslim immigrant”.
That post garnered over four million views.
Within 24 hours, all X posts speculating that the perpetrator was Muslim, a refugee, a foreign national, or arrived by boat, had an estimated 27 million impressions.
In addition, in the two weeks following the Southport attack, Tommy Robinson’s posts on X received over 580 million views – an unprecedented reach for a figure who has been banned from multiple social media platforms for breaching hate speech rules.
The report points out that since Elon Musk’s takeover in late 2022, X has weakened many of its safety guardrails aimed at curbing harmful content and disinformation.
These include mass layoffs of content moderation staff and the reinstatement of banned accounts.
Sacha Deshmukh, chief executive of Amnesty International, said: “By amplifying hate and misinformation on such a massive scale, X acted like petrol on the fire of racist violence in the aftermath of the Southport tragedy.
“The platform’s algorithm not only failed to ‘break the circuit’ and stop the spread of dangerous falsehoods; they are highly likely to have amplified them.”
Olivia Barber is a reporter at Left Foot Forward
‘Labour can’t ignore the Online Safety Act’s harms ‒ it must act to fix them’

Although it became law almost two years ago, the UK’s Online Safety Act is finally biting. This week, internet users in the UK will have found themselves blocked from accessing all kinds of content, from Goya paintings and Livy quotes to satirical tweets about the Prime Minister.
The backlash to the newly age-gated internet has been loud and immediate, but we are still only in the early stages of the Act’s implementation. UK users will soon face further waves of restrictions on what they can access and say online, as more complex provisions and obligations come into force over the coming months.
Although the new restrictions for access to porn sites have grabbed the headlines (and launched a thousand memes), the categories of speech that must now be blocked for UK users are far broader than just pornography. Content that is considered harmful to children must now be age-gated for everyone.
‘Censorship has ranged from the absurd to the deeply troubling’
In just one week, censorship has ranged from the absurd to the deeply troubling. Internet users in the UK trying to access support pages for alcohol addiction or child sexual abuse will now find their way barred. Survivors of sexual assault can no longer freely access advice and support groups on Reddit. Content about the wars in Gaza and Ukraine has similarly been blocked.
There are even alarming reports of social media sites blocking political and legal content, with UK users unable to see parliamentary speeches and legislative amendments calling for an inquiry into grooming gangs without first verifying their age.
When faced with complex obligations and massive fines, many platforms will give up any attempt to allow free discourse online, and will instead clamp down on all speech – with no nuance or respect for our right to share and seek information.
Effectively, UK internet users now only have access to a childproofed version of the internet, unless they undergo online age verification. Platforms are farming these checks out to a host of third-party companies, which can request a range of deeply personal information, from biometric face scans to copies of passports. The risk of a data breach or hack could be devastating. Understandably, many users – particularly LGBTQ people, victims of stalking or domestic violence, whistleblowers, and journalists – will feel uncomfortable handing over their ID and linking it to their online activity.
‘The government appears to have stuck its fingers in its ears’
As has been the case in so many areas of public life, the outgoing Conservative government left behind a mess. The Online Safety Act is no different – sloppily drafted, vague and over-broad, and handing any future Secretary of State considerable power to designate certain kinds of content as ‘harmful’ and worthy of censorship.
While the Act was hauled through parliament by a succession of Conservative governments and an ever-changing roster of ministers, the Labour opposition provided no meaningful scrutiny. What could have been an opportunity for thoughtful and detailed technology policy was lost and the only criticism from the Labour benches was that the Bill did not go far enough.
Inflammatory statements from the Secretary of State for Technology haven’t helped, accusing anyone who criticises the Act of “being on the side of predators”. Rather than seriously engaging with the widespread and valid concerns of the past week, the government appears to have stuck its fingers in its ears.
Censorship and surveillance are not silver-bullet solutions to a generation of children being sucked into wormholes of harmful online content. Instead, adults and children alike are being placed at risk of security breaches, hacks, leaks, digital exclusion and a shuttered internet. The Online Safety Act is not fit for purpose.
Rather than carrying this Tory albatross around their necks, the Labour government should return to the drawing board to make sure we’re all safe online, without sacrificing our most fundamental rights.