If the defining characteristic of Sir Keir Starmer’s Labour Government is fear — with nervousness to the right, and anxiety especially to the left — then the raw afternoon is rawest, and the dense fog is densest, and the muddy streets are the muddiest regarding the role of US corporations in the UK’s economy (apologies to Charles Dickens). At the heart of that fear sits the gigantic hyped edifice of what we now call artificial intelligence, squatting over the Starmer administration like an angry tech-god demanding constant supplication. 

Politicians have been haunted by the collapse of our economic model since 2008, and ministers repeat, like a mantra, that AI is going to give us growth. There is an AI Opportunities Action Plan, connected to a demeaning ‘US-UK Technology Prosperity Deal’, and within it a set of educational initiatives. We need AI lesson assistants, apparently, along with various AI tools to speed up teachers’ lesson planning and reduce their workloads.

There are supposedly positive examples: an elite chain of schools (naturally called ‘Alpha School’) wires children up to ‘AI’ for three hours a day, allowing each child to move at their own pace through dynamically generated lessons, using the very latest education technology. Meanwhile, Silicon Valley entrepreneurs have moved in on the education market. Sometimes this is direct, as in the case of the Genius Group, a ‘leading AI-powered bitcoin-first education group’; sometimes it is less so, as existing US Big Tech companies quietly incorporate generative AI functionality into their core services. Who would want to be left out? Who would want to have to compose a letter or email from scratch? Certainly not the ministers in a precarious Labour government, who may, incidentally, need to find alternative revenue streams at some point in the next four years.

Within the UK government, the ‘EdTech Evidence Board’ (which draws on Ofsted expertise) is poised to evaluate the impact of AI on teaching and learning, and its implications for future policy. However, some cynics have suggested that the ability of AI to do anything new in the classroom may, in many ways, turn out to be something of a damp squib. AI is arguably useful for reducing the workload of some computing or tech projects, such as repetitive coding tasks, indexing, or enabling agents capable of simplified remote control; equally, it can, in theory, be used for applied language practice, alongside all the other existing techniques for learning languages. But these are approaches which could, at a gentle and controlled gradient, augment current educational practice.

As with so many of the UK government’s policy deliberations, there are contradictions between this and other government or public initiatives which seem difficult to resolve — largely as a result of promises made behind closed doors. The evidence-based approach proposed by Ofsted is contradicted by a ‘memorandum of understanding’ signed in July 2025 by the Department for Science, Innovation and Technology (DSIT) and OpenAI, the provider of ChatGPT. This followed similar agreements with Anthropic and Google, with promises that OpenAI will work with DSIT to find areas where ‘advanced AI models’ can be deployed. Though Education Secretary Bridget Phillipson is relatively restrained in her enthusiasm for AI, a large pilot programme is being rolled out to over 1,000 schools in England. One of the uses cited is homework being marked and graded by a machine. AI announcements, as ever, lurch from the grandiose to the irredeemably prosaic.

Whatever the government decides, AI technology is already affecting how students approach tasks. It is one thing to use AI (or automated, non-AI systems, which have, of course, been available for some time now) to mark certain homework. But what about when students are asked to produce essays and projects of their own, containing routine and non-routine tasks, outside of a controlled environment? Once teachers’ feedback is automated, much of the education system could effectively be reduced to different large language models squawking ineffectively at each other.

Documenting the existing impacts of AI on education is an urgent task. There are a number of hypotheses, but the sample sizes and complexity of the existing studies will usually not bear much weight. Some studies suggest that students who use ChatGPT may develop a degree of reliance on it, and will be unable to respond effectively when faced with exam conditions. Going still further, a couple of studies have pointed to critical thinking being ‘offloaded’ to large language models, potentially resulting in ‘metacognitive laziness’. In contrast, other studies have pointed to AI tutoring yielding hugely impressive learning outcomes.

Evidence-based responses to the existing challenges of AI usage should not preclude recommendations that might make student use of AI less rewarding: moving tasks away from computers, for instance, or establishing offline environments in which tasks are completed. Research into the existing laissez-faire situation seems to be less of a priority than finding novel uses for AI. That is telling of how the Starmer administration (and, to be fair, the Rishi Sunak administration before it) has handled potential conflicts with Big Tech companies: by pre-emptive capitulation, in almost all cases.

It is difficult to take inflated claims for AI in education at face value. We might look for other motivations, and some of these may be ideological. The Trump administration is extremely keen on AI in American schools; it is the only education initiative of any note that the administration is currently pursuing. AI, say Silicon Valley entrepreneurs, can be used to replace teachers, or at least to downgrade them as a unionised profession. Bill Gates is a leading advocate for replacing teachers with AI. Partly, this reflects a ‘tech’ mindset, in which everything, including the often messy and all-too-human processes of educating children and adolescents, can somehow be made cleaner and more efficient. For the most part, however, AI in education is about business: companies embedding themselves into public education funding as vendors and suppliers.

Notably, surveillance is a critical part of the approach adopted by Alpha School, with webcams used to monitor children at all times. AI systems will increasingly want to see who is asking questions and, from that, deduce the human’s emotional state. This points to further devices ending up in educational settings, possibly marketed initially as ‘pedagogical assistants’ of some sort, potentially in combination with Augmented Reality hardware such as the ‘smart glasses’ advocated by Meta’s CEO, Mark Zuckerberg. It could be pendants that act as monitoring and intervention devices, potentially also capturing and analysing individual speech and other learning interactions.

Looming over all of this is a nexus of power and money. So much of children’s education already relies upon the Microsoft Corporation, which holds huge power over various parts of the UK’s economic and social life, that the truth is simple: if Microsoft wishes to roll out AI functionalities to students of any age, it will do so, and this will be accompanied by pressure on educators to demonstrate their use of those functionalities. And yes, Microsoft is already the host, providing the container in which UK schools put all their words, images, and dreams.

It is therefore only a more fundamental, public questioning of the role of technology in education, and in particular of the role of a monopoly provider suspected of ‘vendor capture’, that would begin to address the root causes. It is perhaps not surprising that the existing impact of AI on education is proving so difficult to assess when we have no clear, authoritative conclusions on the impact of providing children with tablets and laptops, a plan which originated in the last century. We really have no idea, in particular, how this ostensibly progressive and unimpeachable generosity has affected, for example, boys’ reading abilities, for good or ill.

In 2025, the Netflix series ‘Adolescence’ became a talking point for the political class, with even Starmer speaking emotionally about how, as a father, it reminded him to communicate more with his children. Despite the more cautious tone of the UK’s Education Secretary, Starmer’s government has shown manic enthusiasm in its commitment to prioritise a technology that will encourage teenagers to form much closer, even intimate, connections with bots. The truth may really be quite simple: technology companies started running away with education some time ago, and it will be hard to get it back. Now might be a good time to start.