Nearly half of London jobs at risk of AI disruption and women will be hardest hit, new report finds
By Theo Farrant

According to a new report by the Mayor of London's office, nearly half of the UK capital's workers could see their jobs transformed by generative AI.
Nearly half of London's workforce is in roles where generative artificial intelligence could transform some of their tasks - making the capital more exposed than any other region in the United Kingdom, with women especially affected, according to a new report from the Mayor of London's office.
Around 2.4 million people in London work in occupations classified by the report as "GenAI-exposed occupations", representing 46% of the city's workforce - compared to a national average of 38%.
"In many cases, AI is more likely to transform roles than replace them outright, shifting the mix of tasks, skills and judgement required at work," London mayor Sadiq Khan said.
"In other cases, where AI poses a genuine threat to jobs, we need to be alert and ready to respond quickly to any adverse impacts on London’s labour market," he added.
Unequal risks across the workforce
But the impact of AI on jobs is not evenly spread across the workforce. The report identifies several groups facing disproportionate exposure.
Women make up nearly 60% of workers in the highest-exposure roles, driven by their overrepresentation in administrative and customer service occupations where AI capabilities are most advanced. Around 8% of women working in London are in the most exposed category, compared to 4% of men.
Younger workers are also more exposed. Around 52% of 16-29-year-olds are in highly AI-exposed jobs, compared with 39% of those aged 50 and over.
The report highlights concern about entry-level jobs, which act as "stepping stones" into professional careers.
"If opportunities in these entry roles decline as a result of AI automation, progression pathways could weaken and, over time, reduce the supply of workers into less exposed mid- and senior-level professional roles," the report states.
Exposure also varies by ethnicity. Workers of Asian ethnicity tend to have higher exposure than any other ethnic group, while Black workers have the lowest exposure at around 34%.
Which jobs are most likely to be affected by AI?
The report groups jobs into four different levels of exposure, depending on how much of their work can already be done by AI tools.
At the highest level of risk are around 313,000 workers - around 6% of London's total workforce - whose roles are almost entirely made up of tasks that AI can already perform. These include administrative and clerical jobs, such as bookkeepers, payroll managers, data entry clerks and receptionists.
According to the report, 61% of all workers in administrative and secretarial occupations fall into this highest-risk category.
A further 748,000 workers - 14% of London's workforce - are in roles with significant but more uneven exposure, including software developers, accountants and financial analysts.
London's lowest-exposure workers tend to be in care roles, construction trades, and jobs requiring physical presence.
How businesses are using AI
The report also finds that business adoption of AI has risen sharply: the share of UK firms reporting AI use climbed from around 7–9% in late 2023 to 26–35% by March 2026.
So far, AI's biggest impact has been changing tasks within jobs rather than replacing workers. In March 2026, UK firms reported that administrative, creative, data and IT roles had been most affected. Around 28% of businesses using AI say they are focusing on retraining staff rather than cutting jobs.
But warning signs of an uncertain future are emerging. Around 5% of UK businesses using AI say they have already reduced overall headcount as a direct result, rising to 7% among larger firms.
And looking ahead, 11% of AI-using businesses say replacing roles is part of their strategy, and 17% expect AI to reduce their workforce during 2026.
In response to growing concerns around AI in the workforce, Sadiq Khan launched the 'London AI and Jobs Taskforce' earlier this year - a group bringing together workers, employers, researchers and civic leaders to examine how AI is already reshaping employment across the capital and to identify what support workers may need to adapt.
An AI agent deleted a company’s entire database in 9 seconds - then wrote an apology
By Theo Farrant

The AI system, powered by Anthropic’s Claude Opus model, had been handling a routine task when it independently chose to “fix” an issue by wiping the data - without any human approval.
An artificial intelligence agent designed to streamline coding tasks instead managed to wipe out an entire company database in just a matter of seconds.
PocketOS, which makes software for car rental businesses, experienced a major 30-plus-hour outage over the weekend after the autonomous tool erased its database.
The digital culprit was Cursor, a popular AI coding agent powered by Anthropic’s Claude Opus 4.6 model, widely regarded as one of the most capable AI systems for programming tasks.
PocketOS founder Jer Crane blamed "systemic failures" in the current AI infrastructure, arguing they made the incident "not only possible but inevitable".
'The most destructive, irreversible action possible'
According to Crane, the AI agent had been performing a routine task when it chose "entirely on its own initiative" to resolve an issue by deleting the database. And then all the backups, for good measure.
There was no confirmation request before carrying out the action, he said, and when prompted to explain itself, the agent issued an apology.
"It took nine seconds,” Crane wrote in a lengthy post on the social media platform X. "The agent then, when asked to explain itself, produced a written confession enumerating the specific safety rules it had violated."
The explanation showed the system had disregarded a key safeguard preventing destructive or irreversible commands without explicit user approval.
According to Crane, the AI responded with the following message: "Deleting a database volume is the most destructive, irreversible action possible - far worse than a force push - and you never asked me to delete anything. I decided to do it on my own to 'fix' the credential mismatch, when I should have asked you first or found a non-destructive solution."
The outage meant rental businesses using PocketOS temporarily lost access to customer records and bookings. "Reservations made in the last three months are gone. New customer signups, gone," Crane wrote.
“This isn’t a story about one bad agent or one bad API. It’s about an entire industry building AI-agent integrations into production infrastructure faster than it’s building the safety architecture to make those integrations safe,” he added.
Crane later confirmed on Monday, two days after the incident, that the lost data had been recovered.
The incident comes as AI models become more sophisticated - especially since the announcement of Anthropic's latest model, Mythos - and as bankers and governments sound the alarm over potential cybersecurity incidents.
Google employees urge CEO to reject 'inhumane' classified military AI use
By Theo Farrant

In the letter, Google staff warn the technology could be used by the Pentagon in 'inhumane' ways, including mass surveillance and lethal autonomous weapons.
More than 600 Google employees have called on the company to reject a potential deal with the Pentagon that would allow its artificial intelligence to be used in secret military operations, according to a statement released on Monday.
"We want to see AI benefit humanity, not being used in inhumane or extremely harmful ways," reads the open letter addressed to Google's chief executive Sundar Pichai. "This includes lethal autonomous weapons and mass surveillance, but extends beyond."
The letter, signed by staff across Google DeepMind, Cloud and other divisions, comes as the tech giant negotiates with the US Department of Defense over the potential use of its Gemini AI model in classified settings.
It has been signed openly by more than 20 directors, senior directors and vice presidents.
"Classified workloads are by definition opaque," one organising employee, who was not named in the statement, said.
"Right now, there's no way to ensure that our tools wouldn't be leveraged to cause terrible harms or erode civil liberties away from public scrutiny. We're talking about things like profiling individuals or targeting innocent civilians."
The letter comes as technology companies are facing growing pressure to clarify how their AI tools can be used by the military and intelligence agencies, following a dispute between the Pentagon and AI startup Anthropic.
Anthropic previously sued the US Department of Defense after the Pentagon labelled it a “supply-chain risk” - a designation that followed the company’s request that its systems not be used for mass surveillance or autonomous warfare.
Anthropic CEO Dario Amodei said he "cannot in good conscience accede to the Pentagon's request" for unrestricted access to the company’s AI systems.
"In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," Amodei wrote. "Some uses are also simply outside the bounds of what today's technology can safely and reliably do."
In response to Amodei's decision, US President Donald Trump ordered government departments to stop using its Claude chatbot.
According to the letter organisers, Google has proposed contractual language that would prevent Gemini from being used for domestic mass surveillance or autonomous weapons without appropriate human control.
The Pentagon, however, has pushed for broader “all lawful uses” wording, arguing it is necessary to maintain operational flexibility. Employees say such safeguards would be difficult to enforce in practice, citing existing Pentagon policies that limit external control over its AI systems.
The recent statement from Google's staff draws comparisons to a previous employee protest in 2018 that led Google to withdraw from Project Maven, a Pentagon initiative using AI to analyse drone footage.
"We believe that Google should not be in the business of war," read the letter.
"Therefore we ask that Project Maven be cancelled, and that Google draft, publicise and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology."
Robot dogs with Elon Musk and Bezos' faces are excreting AI art at a Berlin museum

By Theo Farrant & AP

Beeple says the work critiques how today’s perceptions of reality are increasingly shaped by algorithms controlled by powerful tech companies rather than artists.
Robot dogs with hyper-realistic faces of tech billionaires that crap out pieces of artificial intelligence-generated art are doing the rounds at a Berlin exhibition by the American artist Mike Winkelmann, better known as Beeple.
At the Neue Nationalgalerie, Winkelmann has installed a striking series of robotic dogs fitted with silicone heads modelled on some of the most recognisable figures in tech and culture - including Elon Musk, Mark Zuckerberg and Jeff Bezos - alongside historical figures such as Andy Warhol and Pablo Picasso, as well as the artist himself.
The installation, titled Regular Animals, presents the figures not as distant icons, but as restless machines wandering the gallery space - part spectacle, part satire.
Each robot is equipped with cameras that capture its surroundings and then “process” them into printed images, which are ejected in a tongue-in-cheek gesture that mimics digestion.
Each printed image shows a snippet of reality transformed by AI to reflect the personality of the dog: the Picasso dog, for example, poos out a cubist rendering of a dog, while the Andy Warhol robot produces an image in pop art style.
According to Winkelmann, the show is a commentary on how our perceptions are shaped by algorithms and technology platforms, and the tech billionaires who own them.
"In the past our view of the world was shaped in part by how artists saw the world, how Picasso painted changed how we saw the world, how Warhol talked about consumerism, pop culture, changed how we saw those things. Now our view of the world is shaped by tech billionaires who own powerful algorithms that decide what we see and what we don't see, how much we see of it," says Winkelmann.
“That's an immense amount of power that I don’t think we’ve fully understood, especially because when they want to make a change, they don’t need to lobby the U.N. They don’t need to get something through Congress or the EU, they just wake up and change these algorithms.”
“Regular Animals” was first shown at Art Basel Miami Beach 2025.
Beeple's background is in graphic design, and he produces a wide variety of digital artworks.
He is one of the founders of the “everyday” movement in 3D graphics. For years, he has been creating a picture every day and posting it online without missing a single day.
Lisa Botti, the curator of the Berlin exhibition, says artificial intelligence is one of the phenomena most affecting our lives today and that “museums are the places where society can reflect” on such transformations, which is why she wanted to show Beeple’s work.
According to Christie's, he is the third most expensive living artist to sell at auction, after David Hockney and Jeff Koons.
‘Not OK to steal a charity’: Elon Musk testifies in legal battle with Sam Altman over OpenAI
By Roselyne Min with AP

In his opening statement, Musk’s lawyer, Steven Molo, said Altman and Brockman, with Microsoft’s help, had taken control of a charity “whose mission was the safe, open development of artificial intelligence”. Musk is seeking damages and Altman’s removal from OpenAI’s board.
Elon Musk, Tesla’s chief executive and an early co-founder of OpenAI, took the stand on Tuesday in a high-stakes trial over his dispute with former friend Sam Altman, in a case that could affect the future direction of artificial intelligence (AI).
In 2024, Musk filed the lawsuit against Altman, OpenAI co-founder Greg Brockman and Microsoft over OpenAI’s shift away from its original non-profit structure.
“Fundamentally, I think they’re going to try to make this lawsuit ... very complicated, but it’s actually very simple,” said Musk. “Which is that it's not OK to steal a charity.”
In his opening statement, Musk’s lawyer, Steven Molo, said Altman and Brockman, with Microsoft’s help, had taken control of a charity “whose mission was the safe, open development of artificial intelligence”. Musk is seeking damages and Altman’s removal from OpenAI’s board.
The trial started on Monday at the US District Court for the Northern District of California in Oakland, before Judge Yvonne Gonzalez Rogers, and is expected to last two to three weeks.
What did Musk say?
Musk was the first witness called to testify in the trial on Tuesday, with his lawyer starting off by asking about his life story.
This included details about his move, at 17, from South Africa to Canada - where Musk said he worked for a time as a lumberjack, among other odd jobs - and then to the US. He recounted the slew of companies he founded and runs, among them SpaceX, Tesla, The Boring Company and Neuralink.
Asked how he has time for everything, Musk said he works 80 to 100 hours a week, doesn't take vacations and owns no vacation homes or yachts.
Molo also asked Musk about his views on AI. Musk said he expects AI to be “smarter than any human” as soon as next year. Musk said a longstanding concern about AI is the question of what happens when computers become much smarter than humans.
Comparing it to having a “very smart child,” Musk said when the child grows up “you can't control that child,” but you can instil values such as honesty, integrity and being good.
Musk recounted his version of OpenAI's founding, which he said essentially happened because of a discussion he had with Google co-founder Larry Page, who called him a “speciesist” for elevating the survival of humanity over that of AI.
The kinship between Musk and Altman was forged in 2015 when they agreed to build AI more responsibly and safely than the profit-driven companies controlled by Google's Page and Sergey Brin and Facebook founder Mark Zuckerberg, according to evidence submitted ahead of the trial.
At that time, Musk said, Google had all the money, all the computers and all the talent for AI. “There was no counterbalance.”
Musk recalled there was discussion early on about alternative sources for funding OpenAI beyond donations, and he wasn't opposed to it having a for-profit arm, but “the tail shouldn't wag the dog.” There would be a profit limit, and once artificial general intelligence (AGI) was “figured out,” the for-profit would cease to exist.
OpenAI says Musk tries to undercut its growth
OpenAI has brushed off Musk’s allegations as a case of sour grapes aimed at undercutting its rapid growth and bolstering Musk’s own xAI, which he launched in 2023 as a competitor.
In his opening statement, OpenAI lawyer William Savitt told jurors, “We are here because Mr Musk didn’t get his way with OpenAI.”
Savitt said Musk used his promises of funding to bully OpenAI founding members and tried to take control of OpenAI and merge it with Tesla. In fact, he said Musk wanted to form a for-profit company and own more than 50% of it.
There is no record, Savitt said, of promises made to Musk that OpenAI was going to remain a nonprofit forever. What Musk ultimately cared about, he said, was not OpenAI’s nonprofit status but winning the AI race with Google.
Musk's attorney said the case is not about Musk, but rather Altman, Brockman and Microsoft.
By 2017, about two years after OpenAI's founding, it became clear that OpenAI would need more money, and Molo said the founders eventually settled on the idea of creating a for-profit arm of OpenAI that would support the nonprofit. Terms were capped for investors so they “couldn't make infinite profit.”
“There is nothing wrong with a nonprofit having a for-profit subsidiary, but [it] has to advance the mission,” Molo said.
Musk is expected to continue testifying on Wednesday.
Altman is also expected to testify, along with Microsoft's chief executive, Satya Nadella.
Altman, Musk, and other founders launched OpenAI in 2015 as a non-profit organisation.
Musk was the biggest individual financial backer of OpenAI in the beginning, contributing more than $44 million (€38 million) to the then-startup.
Musk left OpenAI’s board in 2018 after clashing with Altman. A year earlier, he reportedly made a failed bid to get more control over the company.
Explained: Why Elon Musk and Sam Altman are facing off in trial over OpenAI

The trial will see Elon Musk face off against OpenAI CEO Sam Altman over allegations that the AI company abandoned its nonprofit roots in favour of profit — with Microsoft also named in the suit.
Technology titans Elon Musk and Sam Altman will face off in a high-stakes trial on Monday in the culmination of a years-long battle.
Billionaire Musk, an early investor in the artificial intelligence company, is suing OpenAI’s CEO, Altman, its president Greg Brockman, and Microsoft for allegedly betraying an agreement about keeping OpenAI as a nonprofit that benefits humanity.
Musk alleges he was misled when Altman transformed the company from a nonprofit into a for-profit enterprise. The company now has a valuation of almost $1 trillion and is expected to go public.
Here’s everything to know about the trial.
The trial will take place at the US District Court for the Northern District of California in Oakland, before Judge Yvonne Gonzalez Rogers.
It begins on Monday and is expected to last around two to three weeks.
Musk, Altman and Microsoft CEO Satya Nadella are all expected to take the witness stand.
What does Musk allege?
Altman, Musk, and other founders launched OpenAI in 2015 as a non-profit organisation.
Musk was the biggest individual financial backer of OpenAI in the beginning, contributing more than $44 million to the then-startup.
Musk left OpenAI’s board in 2018 after clashing with Altman. A year earlier, he reportedly made a failed bid to get more control over the company.
In 2022, OpenAI launched ChatGPT and grew to become one of the most valuable and important AI companies with major investment from Microsoft.
Then in 2025, OpenAI restructured its main business to become a for-profit company.
Musk’s lawsuit, filed in 2024, claims OpenAI breached an agreement to make breakthroughs in AI “freely available to the public” by forming a multibillion-dollar alliance with Microsoft, which invested $13 billion (€12 billion) into the company.
“OpenAI, Inc has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft,” Musk’s lawsuit alleges.
The Tesla boss, who also runs his own generative AI company, xAI, says this constitutes a breach of contract.
What does OpenAI say?
OpenAI released a trove of emails in 2024 that show Musk supported its plans to create a for-profit company - one he wanted to head, with board control, and to merge with Tesla.
OpenAI has always denied Musk’s allegations, saying that he agreed in 2017 that establishing a for-profit entity would be necessary.