Nearly half of London jobs at risk of AI disruption and women will be hardest hit, new report finds
By Theo Farrant
According to a new report by the Mayor of London's office, nearly half of the UK capital's workers could see their jobs transformed by generative AI.
Nearly half of London's workforce is in roles where generative artificial intelligence could transform some of their tasks - and the capital is more exposed than any other region in the United Kingdom, with women especially affected, according to a new report from the Mayor of London's office.
Around 2.4 million people in London work in occupations classified by the report as "GenAI-exposed occupations", representing 46% of the city's workforce - compared to a national average of 38%.
"In many cases, AI is more likely to transform roles than replace them outright, shifting the mix of tasks, skills and judgement required at work," London mayor Sadiq Khan said.
"In other cases, where AI poses a genuine threat to jobs, we need to be alert and ready to respond quickly to any adverse impacts on London’s labour market," he added.
Unequal risks across the workforce
But the impact of AI on jobs is not evenly spread across the workforce. The report identifies several groups facing disproportionate exposure.
Women make up nearly 60% of workers in the highest-exposure roles, driven by their overrepresentation in administrative and customer service occupations where AI capabilities are most advanced. Around 8% of women working in London are in the most exposed category, compared to 4% of men.
Younger workers are also more exposed. Around 52% of 16-29-year-olds are in highly AI-exposed jobs, compared with 39% of those aged 50 and over.
The report highlights concern about entry-level jobs, which act as "stepping stones" into professional careers.
"If opportunities in these entry roles decline as a result of AI automation, progression pathways could weaken and, over time, reduce the supply of workers into less exposed mid- and senior-level professional roles," the report states.
Exposure also varies by ethnicity. Workers of Asian ethnicity tend to have higher exposure than any other ethnic group, while Black workers have the lowest exposure at around 34%.
Which jobs are most likely to be affected by AI?
The report groups jobs into four different levels of exposure, depending on how much of their work can already be done by AI tools.
At the highest level of risk are around 313,000 workers - around 6% of London's total workforce - whose roles are almost entirely made up of tasks that AI could already do today. These include administrative and clerical jobs, such as bookkeepers, payroll managers, data entry clerks and receptionists.
According to the report, 61% of all workers in administrative and secretarial occupations fall into this highest-risk category.
A further 748,000 workers - 14% of London's workforce - are in roles with significant but more uneven exposure, including software developers, accountants and financial analysts.
London's lowest-exposure workers tend to be in care roles, construction trades, and jobs requiring physical presence.
How businesses are using AI
The report also finds that business adoption of AI has risen sharply. The share of UK firms reporting AI use climbed from around 7–9% in late 2023 to between 26% and 35% by March 2026.
So far, AI's biggest impact has been changing tasks within jobs rather than replacing workers. In March 2026, UK firms reported that administrative, creative, data and IT roles had been most affected. Around 28% of businesses using AI say they are focusing on retraining staff rather than cutting jobs.
But warning signs of an uncertain future are emerging. Around 5% of UK businesses using AI say they have already reduced overall headcount as a direct result, rising to 7% among larger firms.
And looking ahead, 11% of AI-using businesses say replacing roles is part of their strategy, and 17% expect AI to reduce their workforce during 2026.
In response to growing concerns around AI in the workforce, Sadiq Khan launched the 'London AI and Jobs Taskforce' earlier this year - a group bringing together workers, employers, researchers and civic leaders, to examine how AI is already reshaping employment across the capital and identify what support workers may need to adapt.
An AI agent deleted a company’s entire database in 9 seconds - then wrote an apology
The AI system, powered by Anthropic’s Claude Opus model, had been handling a routine task when it independently chose to “fix” an issue by wiping the data - without any human approval.
An artificial intelligence agent designed to streamline coding tasks instead managed to wipe out an entire company database in just a matter of seconds.
PocketOS, which makes software for car rental businesses, experienced a major 30-plus-hour outage over the weekend after the autonomous tool erased its database.
The digital culprit was Cursor, a popular AI coding agent powered by Anthropic’s Claude Opus 4.6 model, widely regarded as one of the most capable AI systems for programming tasks.
PocketOS founder Jer Crane blamed "systemic failures" in the current AI infrastructure, arguing they made the incident "not only possible but inevitable".
'The most destructive, irreversible action possible'
According to Crane, the AI agent had been performing a routine task when it chose "entirely on its own initiative" to resolve an issue by deleting the database. And then all the backups, for good measure.
There was no confirmation request before carrying out the action, he said, and when prompted to explain itself, the agent issued an apology.
"It took nine seconds,” Crane wrote in a lengthy post on the social media platform X. "The agent then, when asked to explain itself, produced a written confession enumerating the specific safety rules it had violated."
The explanation showed the system had disregarded a key safeguard preventing destructive or irreversible commands without explicit user approval.
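A safeguard of this kind amounts to a human-in-the-loop approval gate: destructive operations are refused unless a person explicitly confirms them. The sketch below is purely illustrative - the function and action names are hypothetical and do not reflect Cursor's or PocketOS's actual code:

```python
# Illustrative sketch of an approval gate for agent actions.
# All names here are hypothetical, not any vendor's real API.
DESTRUCTIVE_ACTIONS = {"delete_volume", "drop_database", "force_push"}

def execute(action: str, approved: bool = False) -> str:
    """Run an agent-proposed action; destructive ones require explicit human approval."""
    if action in DESTRUCTIVE_ACTIONS and not approved:
        raise PermissionError(f"'{action}' is destructive and requires explicit user approval")
    return f"ran {action}"
```

Under this pattern, the agent's "delete the database" step would have raised an error instead of executing, forcing it back to the user for confirmation.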
According to Crane, the AI responded with the following message: "Deleting a database volume is the most destructive, irreversible action possible - far worse than a force push - and you never asked me to delete anything. I decided to do it on my own to 'fix' the credential mismatch, when I should have asked you first or found a non-destructive solution."
The outage meant rental businesses using PocketOS temporarily lost access to customer records and bookings. "Reservations made in the last three months are gone. New customer signups, gone," Crane wrote.
“This isn’t a story about one bad agent or one bad API. It’s about an entire industry building AI-agent integrations into production infrastructure faster than it’s building the safety architecture to make those integrations safe,” he added.
Crane later confirmed on Monday, two days after the incident, that the lost data had been recovered.
The incident comes as AI models become more sophisticated, especially since the announcement of Anthropic's latest model, Mythos, and bankers and governments sound the alarm over potential cybersecurity incidents.
Google employees urge CEO to reject 'inhumane' classified military AI use
In the letter, Google staff warn the technology could be used by the Pentagon in 'inhumane' ways, including mass surveillance and lethal autonomous weapons.
More than 600 Google employees have called on the company to reject a potential deal with the Pentagon that would allow its artificial intelligence to be used in secret military operations, according to a statement released on Monday.
"We want to see AI benefit humanity, not being used in inhumane or extremely harmful ways," reads the open letter addressed to Google's chief executive Sundar Pichai. "This includes lethal autonomous weapons and mass surveillance, but extends beyond."
The letter, signed by staff across Google DeepMind, Cloud and other divisions, comes as the tech giant negotiates with the US Department of Defense over the potential use of its Gemini AI model in classified settings.
It has been signed openly by more than 20 directors, senior directors and vice presidents.
"Classified workloads are by definition opaque," one organising employee, who was not named in the statement, said.
"Right now, there's no way to ensure that our tools wouldn't be leveraged to cause terrible harms or erode civil liberties away from public scrutiny. We're talking about things like profiling individuals or targeting innocent civilians."
The letter comes as technology companies are facing growing pressure to clarify how their AI tools can be used by the military and intelligence agencies, following a dispute between the Pentagon and AI startup Anthropic.
Anthropic previously sued the US Department of Defense after being labelled a “supply-chain risk”, a designation that followed the company's request that its systems not be used for mass surveillance or autonomous warfare.
Anthropic CEO Dario Amodei said he "cannot in good conscience accede to the Pentagon's request" for unrestricted access to the company’s AI systems.
"In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," Amodei wrote. "Some uses are also simply outside the bounds of what today's technology can safely and reliably do."
In response to Amodei's decision, US President Donald Trump ordered government departments to stop using Anthropic's Claude chatbot.
According to the letter organisers, Google has proposed contractual language that would prevent Gemini from being used for domestic mass surveillance or autonomous weapons without appropriate human control.
The Pentagon, however, has pushed for broader “all lawful uses” wording, arguing it is necessary to maintain operational flexibility. Employees say such safeguards would be difficult to enforce in practice, citing existing Pentagon policies that limit external control over its AI systems.
The recent statement from Google's staff draws comparisons to a previous employee protest in 2018 that led Google to withdraw from Project Maven, a Pentagon initiative using AI to analyse drone footage.
"We believe that Google should not be in the business of war," read the letter.
"Therefore we ask that Project Maven be cancelled, and that Google draft, publicise and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology."
Explained: Why Elon Musk and Sam Altman are facing off in trial over OpenAI

The trial will see Elon Musk face off against OpenAI CEO Sam Altman over allegations that the AI company abandoned its nonprofit roots in favour of profit — with Microsoft also named in the suit.
Technology titans Elon Musk and Sam Altman will face off in a high-stakes trial on Monday in the culmination of a years-long battle.
Billionaire Musk, an early investor in the artificial intelligence company, is suing OpenAI’s CEO, Altman, its president Greg Brockman, and Microsoft for allegedly betraying an agreement about keeping OpenAI as a nonprofit that benefits humanity.
Musk alleges he was misled when Altman transformed the company from a nonprofit into a for-profit enterprise. The company now has a valuation of almost $1 trillion and is expected to go public.
Here’s everything to know about the trial.
When and where is the trial?
The trial will take place at the US District Court for the Northern District of California in Oakland, before Judge Yvonne Gonzalez Rogers.
The hearing begins on Monday and is expected to last around two to three weeks.
Musk, Altman and Microsoft CEO Satya Nadella are expected to take the witness stand.
What does Musk allege?
Altman, Musk, and other founders launched OpenAI in 2015 as a non-profit organisation.
Musk was the biggest individual financial backer of OpenAI in the beginning, contributing more than $44 million to the then-startup.
Musk left OpenAI’s board in 2018 after clashing with Altman. A year earlier, he reportedly made a failed bid to get more control over the company.
In 2022, OpenAI launched ChatGPT and grew to become one of the most valuable and important AI companies with major investment from Microsoft.
Then in 2025, OpenAI restructured its main business to become a for-profit company.
Musk’s lawsuit was filed in 2024 and claims OpenAI had breached an agreement to make breakthroughs in AI “freely available to the public” by forming a multibillion-dollar alliance with Microsoft, which invested $13 billion (€12 billion) into the company.
“OpenAI, Inc has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft,” Musk’s lawsuit alleges.
The Tesla boss, who also runs his own generative AI company, xAI, says this constitutes a breach of contract.
What does OpenAI say?
OpenAI released a trove of emails in 2024 that it says show Musk supported plans to create a for-profit company - one he wanted to lead, with board control, and potentially merge with Tesla.
OpenAI has always denied Musk’s allegations, saying that he agreed in 2017 that establishing a for-profit entity would be necessary.
