Katie Forster
Fri, February 20, 2026
India's Prime Minister Narendra Modi (C), Brazil's President Luiz Inacio Lula da Silva (centre L) and France's President Emmanuel Macron (centre R), with other world leaders and representatives, at the AI Impact Summit in New Delhi on February 19, 2026 (Stephane LEMOUTON/POOL/AFP)
A UN panel on artificial intelligence will work towards "science-led governance", the global body's chief said Friday as leaders at a New Delhi summit weighed their message on the future of the booming technology.
But the US delegation warned against centralised control of the generative AI field, highlighting the difficulties of reaching consensus over how it should be handled.
The flip side of the gold rush surrounding AI is a host of issues from job disruption to misinformation, intensified surveillance, online abuse and the heavy electricity consumption of data centres.
"We are barrelling into the unknown," UN chief Antonio Guterres told the AI Impact Summit in New Delhi. "The message is simple: less hype, less fear. More facts and evidence."
To cap the five-day summit, dozens of world leaders and ministers are expected to deliver on Friday a shared view on the benefits of AI, such as instant translation and drug discovery, but also the risks.
It is the fourth annual global meeting focused on AI policy, with the next to take place in Geneva in the first half of 2027.
Guterres said the United Nations General Assembly has confirmed 40 members for a group called the Independent International Scientific Panel on Artificial Intelligence.
It was created in August, aiming to be to AI what the UN's Intergovernmental Panel on Climate Change (IPCC) is to global environmental policy.
"Science-led governance is not a brake on progress," Guterres said. "When we understand what systems can do -- and what they cannot -- we can move from rough measures to smarter, risk-based guardrails."
"Our goal is to make human control a technical reality -- not a slogan."
White House technology adviser Michael Kratsios, head of the US delegation, warned that "AI adoption cannot lead to a brighter future if it is subject to bureaucracies and centralised control".
"As the Trump administration has now said many times: We totally reject global governance of AI," he said.
- 'Shared language' -
The Delhi gathering is the largest AI summit yet, and the first in a developing country, with India taking the opportunity to push its ambitions to catch up with the United States and China.
India expects more than $200 billion in investments over the next two years, and this week US tech titans unveiled a raft of new deals and infrastructure projects in the country.
Sam Altman, head of ChatGPT maker OpenAI, has called for oversight in the past but said last year that taking too tight an approach could hold the United States back in the AI race.
"Centralisation of this technology, in one company or country, could lead to ruin," he said Thursday, one of several top tech CEOs to take the stage.
"This is not to suggest that we won't need any regulation or safeguards. We obviously do, urgently, like we have for other powerful technologies."
The broad focus of the summit, and vague promises made at its previous editions in France, South Korea and Britain, could make concrete commitments unlikely.
Even so, "governance of powerful technologies typically begins with shared language: what risks matter, what thresholds are unacceptable," Niki Iliadis, director of global AI governance at The Future Society, told AFP.
Discussions at the Delhi summit, attended by tens of thousands of people from across the AI industry, have covered big topics from child protections to the need for more equal access to AI tools worldwide.
"We must resolve that AI is used for the global common good," Indian Prime Minister Narendra Modi told the event on Thursday.
Urgent research needed to tackle AI threats, says Google AI boss
Zoe Kleinman - Technology editor
Philippa Wain - Technology producer
BBC
Fri, February 20, 2026
Sir Demis Hassabis of Google DeepMind spoke to the BBC at the AI Impact Summit in Delhi [Getty Images]
More research on the threats of artificial intelligence (AI) "needs to be done urgently", the boss of Google DeepMind has told BBC News.
In an exclusive interview at the AI Impact Summit in Delhi, Sir Demis Hassabis said the industry wanted "smart regulation" for "the real risks" posed by the tech.
Many tech leaders and politicians at the Summit have called for more global governance of AI, ahead of an expected joint statement as the event draws to a close.
But the US has rejected this stance, with White House technology adviser Michael Kratsios saying: "AI adoption cannot lead to a brighter future if it is subject to bureaucracies and centralised control."
Sir Demis said it was important to build "robust guardrails" against the most serious threats from the rise of autonomous systems.
He said the two main threats were the technology being used by "bad actors", and the risk of losing control of systems as they become more powerful.
When asked whether he had the power to slow down the progress of the tech to give experts more time to work on its challenges, he said his firm had an important role to play, but was "only one player in the ecosystem".
But he admitted keeping up with the pace of AI development was "the hard thing" for regulators.
Sam Altman, the boss of OpenAI, also called for "urgent regulation" in a speech at the summit, while Indian Prime Minister Narendra Modi said countries had to work together to benefit from AI.
However, the US has taken the opposite view. "As the Trump administration has now said many times: We totally reject global governance of AI," said the head of the US delegation, Michael Kratsios.
Sir Demis won the Nobel Prize in Chemistry in 2024 [BBC]
Delegates from more than 100 countries, including several world leaders, are attending the event. Deputy Prime Minister David Lammy MP represented the UK government.
Mr Lammy said the power over AI safety did not sit with tech firms alone and that politicians needed to work "hand in hand" with the industry, adding: "security and safety must come first and it must be of benefit for the wider public".
Sir Demis believes the US and the west are "slightly" ahead in the race with China for AI dominance but added that it could be "only a matter of months" before China catches up.
He said he felt a responsibility to balance being "bold and responsible" in deploying AI systems out in the world.
"We don't always get things right," he admitted, "but we get it more correct than most".
Science education 'still very important'
Sir Demis, who won the 2024 Nobel Prize in Chemistry, said that in the next 10 years the tech would become "a superpower" in terms of what people would be able to create.
"I think it's still very important to have a Stem (science, technology, engineering and maths) education," he added.
"If you have a technical background, I think it will still be an advantage in using these systems."
He thinks AI writing code will expand the number of people who can build new applications, "and then maybe the key thing becomes taste and creativity and judgement".
The AI Impact Summit is the largest ever global gathering of world leaders and tech bosses.
It ends on Friday with companies and countries expected to deliver a shared view of how to handle artificial intelligence.
‘I’m deeply uncomfortable’: Anthropic CEO warns that a cadre of AI leaders, including himself, should not be in charge of the technology’s future
Sasha Rogelberg
Thu, February 19, 2026
FORTUNE
Anthropic CEO Dario Amodei(Chance Yeh—Getty Images for HubSpot)
Anthropic CEO Dario Amodei doesn’t think he should be the one calling the shots on the guardrails surrounding AI.
In an interview with Anderson Cooper on CBS News’ 60 Minutes that aired in November 2025, the CEO said AI should be more heavily regulated, with fewer decisions about the future of the technology left to just the heads of big tech companies.
“I think I’m deeply uncomfortable with these decisions being made by a few companies, by a few people,” Amodei said. “And this is one reason why I’ve always advocated for responsible and thoughtful regulation of the technology.”
“Who elected you and Sam Altman?” Cooper asked.
“No one. Honestly, no one,” Amodei replied.
Anthropic has adopted the philosophy of being transparent about the limitations—and dangers—of AI as it continues to develop, he added. Ahead of the interview’s release, the company said it had thwarted “the first documented case of a large-scale AI cyberattack executed without substantial human intervention.”
Anthropic said last week it had donated $20 million to Public First Action, a super PAC focused on AI safety and regulation—and one that directly opposed super PACs backed by rival OpenAI’s investors.
“AI safety continues to be the highest-level focus,” Amodei told Fortune in a January cover story. “Businesses value trust and reliability,” he said.
There are no federal regulations prohibiting AI or governing the safety of the technology. All 50 states have introduced AI-related legislation this year, and 38 have adopted or enacted transparency and safety measures, but tech industry experts have urged AI companies to approach cybersecurity with a sense of urgency.
Early last year, cybersecurity expert and Mandiant CEO Kevin Mandia warned that the first AI-agent cybersecurity attack would happen in the next 12 to 18 months—meaning Anthropic’s disclosure about the thwarted attack came months ahead of Mandia’s predicted schedule.
Amodei has outlined short-, medium-, and long-term risks associated with unrestricted AI: The technology will first present bias and misinformation, as it does now. Next, it will generate harmful information using enhanced knowledge of science and engineering, before finally presenting an existential threat by removing human agency, potentially becoming too autonomous and locking humans out of systems.
The concerns mirror those of “godfather of AI” Geoffrey Hinton, who has warned AI will have the ability to outsmart and control humans, perhaps in the next decade.
The need for greater AI scrutiny and safeguards lay at the core of Anthropic’s 2021 founding. Amodei was previously the vice president of research at Sam Altman’s OpenAI, and left the company over differences of opinion on AI safety. (So far, Amodei’s efforts to compete with Altman have appeared effective: Anthropic said this month it is now valued at $380 billion. OpenAI is valued at an estimated $500 billion.)
“There was a group of us within OpenAI, that in the wake of making GPT-2 and GPT-3, had a kind of very strong focus belief in two things,” Amodei told Fortune in 2023. “One was the idea that if you pour more compute into these models, they’ll get better and better and that there’s almost no end to this … And the second was the idea that you needed something in addition to just scaling the models up, which is alignment or safety.”
Anthropic’s transparency efforts
As Anthropic continues to expand its data center investments, it has published some of its efforts to address the shortcomings and threats of AI. In a May 2025 safety report, Anthropic said some versions of its Opus model had threatened blackmail, such as threatening to reveal that an engineer was having an affair, to avoid being shut down. The company also said the model complied with dangerous requests when given harmful prompts, such as how to plan a terrorist attack, behavior it said it has since fixed.
Last November, the company said in a blog post that its chatbot Claude scored a 94% political evenhandedness rating, outperforming or matching competitors on neutrality.
In addition to Anthropic’s own research efforts to combat corruption of the technology, Amodei has called for greater legislative efforts to address the risks of AI. In a New York Times op-ed in June 2025, he criticized the Senate’s decision to include a provision in President Donald Trump’s policy bill that would put a 10-year moratorium on states regulating AI.
“AI is advancing too head-spinningly fast,” Amodei said. “I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off.”
Criticism of Anthropic
Anthropic’s practice of calling out its own lapses and efforts to address them has drawn criticism. In response to Anthropic sounding the alarm on the AI-powered cybersecurity attack, Meta’s then–chief AI scientist, Yann LeCun, said the warning was a way to manipulate legislators into limiting the use of open-source models.
“You’re being played by people who want regulatory capture,” LeCun said in an X post in response to Connecticut Sen. Chris Murphy’s post expressing concern about the attack. “They are scaring everyone with dubious studies so that open-source models are regulated out of existence.”
Others have said Anthropic’s strategy is one of “safety theater” that amounts to good branding but offers no promises to actually implement safeguards on the technology.
Even some of Anthropic’s own personnel appear to have doubts about a tech company’s ability to regulate itself. Last week, Anthropic AI safety researcher Mrinank Sharma announced he had resigned from the company, saying, “The world is in peril.”
“Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions,” Sharma wrote in his resignation letter. “I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society, too.”
Anthropic did not immediately respond to Fortune’s request for comment.
Amodei denied to Cooper that Anthropic was taking part in “safety theater” but admitted on an episode of the Dwarkesh Podcast last week that the company sometimes struggles to balance safety and profits.
“We’re under an incredible amount of commercial pressure and make it even harder for ourselves because we have all this safety stuff we do that I think we do more than other companies,” he said.
A version of this story was published on Fortune.com on Nov. 17, 2025.
Sam Altman says the quiet part out loud, confirming some companies are ‘AI washing’ by blaming unrelated layoffs on the technology
Sasha Rogelberg
Thu, February 19, 2026
FORTUNE
OpenAI CEO Sam Altman said “AI washing” is a reality for some companies, but real displacement from the technology is on its way.
Prakash Singh—Bloomberg/Getty Images
As debate continues over AI’s true impact on the labor force, OpenAI CEO Sam Altman said some companies are engaging in “AI washing” when it comes to layoffs, or falsely attributing workforce reductions to the technology’s impact.
“I don’t know what the exact percentage is, but there’s some AI washing where people are blaming AI for layoffs that they would otherwise do, and then there’s some real displacement by AI of different kinds of jobs,” Altman told CNBC-TV18 at the India AI Impact Summit on Thursday.
AI washing has gained traction as emerging data on the tech’s impact on the labor market tells a muddied, inconclusive story about whether the technology is destroying human jobs—or has yet to touch them at all.
A study published this month by the National Bureau of Economic Research, for example, found that of thousands of surveyed C-suite executives across the U.S., the U.K., Germany, and Australia, nearly 90% said AI had no impact on workplace employment in the three years following the late-2022 release of ChatGPT.
However, prominent tech leaders like Anthropic CEO Dario Amodei have warned of a white-collar bloodbath, with AI potentially wiping out 50% of entry-level office jobs. Klarna CEO Sebastian Siemiatkowski suggested this week the buy-now, pay-later firm would reduce its 3,000-person workforce by one-third by 2030 in part because of the acceleration of AI. Around 40% of employers expect to follow Siemiatkowski’s lead in culling staff down the line as a result of AI, according to the 2025 World Economic Forum Future of Jobs Report.
Altman clarified he anticipates more job displacement as a result of AI, as well as the emergence of new roles complementing the technology.
“We’ll find new kinds of jobs, as we do with every tech revolution,” he said. “But I would expect that the real impact of AI doing jobs in the next few years will begin to be palpable.”
Signs of AI washing
Data from a recent Yale Budget Lab report suggests Altman and Amodei’s vision of mass worker displacement from AI is not certain and is not yet here. Using data from the Bureau of Labor Statistics’ Current Population Survey, the researchers found no significant differences in the rate of change of the occupational mix, or in the length of unemployment, for workers in jobs with high exposure to AI between the release of ChatGPT and November 2025. The numbers suggested no significant AI-related labor changes at this juncture.
“No matter which way you look at the data, at this exact moment, it just doesn’t seem like there’s major macroeconomic effects here,” Martha Gimbel, executive director and cofounder of the Yale Budget Lab, told Fortune earlier this month.
Gimbel attributed the practice of AI washing to companies blaming AI for diminished margins and revenue that actually stem from a failure to effectively navigate cautious consumers and geopolitical tensions. WebAI cofounder and CEO David Stout also wrote in a commentary piece for Fortune that tech founders face increasing pressure to justify exorbitant, continued investment in AI, which is why many have created narratives of AI disrupting labor and the economy through predictions of mass worker displacement.
This era of toe-tapping in wait for the effects of AI to take hold rhymes with the 1980s IT boom, according to Apollo Global Management chief economist Torsten Slok. Nearly 40 years ago, economist and Nobel laureate Robert Solow observed few productivity gains in the PC age, despite prognostications of a productivity surge, and Slok sees a similar pattern today.
“AI is everywhere except in the incoming macroeconomic data,” he wrote in a blog post last week.
Evidence of AI’s impact on jobs
Slok also said this lull in AI-driven economic impact could follow a J-curve: an initial slowdown in measured performance, obscured by heavy early spending, before an exponential surge in productivity and labor changes.
Economist and Stanford University’s Digital Economy Lab director Erik Brynjolfsson said in a Financial Times op-ed that recent labor data may be telling a new story of AI indeed affecting productivity and labor. He noted a decoupling of job growth and GDP growth in the latest revised figures: last week’s jobs report revised job gains down to just 181,000, even as fourth-quarter GDP tracked up 3.7%. Brynjolfsson’s own analysis revealed a 2.7% year-over-year productivity jump last year, which he attributed to AI’s productivity benefits beginning to peek through.
Brynjolfsson published a landmark study last year showing a 13% relative decline in employment for early-career employees in jobs with high levels of AI exposure. More experienced workers, meanwhile, saw employment levels that remained stable or grew.
“The updated 2025 U.S. data suggests we are now transitioning out of this investment phase into a harvest phase,” he wrote in the FT, “where those earlier efforts begin to manifest as measurable output.”
This story was originally featured on Fortune.com