A top robotics executive at OpenAI said Saturday she had resigned over the company’s deal with the US Department of Defence to allow its artificial intelligence to be used for war and potential domestic surveillance.
Issued on: 08/03/2026
By: FRANCE 24

OpenAI logo is seen in this illustration taken May 20, 2024. © Dado Ruvic, Reuters
OpenAI's top robotics executive said Saturday she had resigned over the artificial intelligence giant's deal with the US government to allow its technology's deployment for war and domestic surveillance.
The company behind ChatGPT secured a defence contract with the Pentagon last month, hours after rival Anthropic refused to agree to unconditional military use of its technology.
OpenAI CEO Sam Altman later posted on X saying the startup would modify the contract so its models would not be used for "domestic surveillance of US persons and nationals", after criticism that it was giving too much power to military officials without oversight.
Caitlin Kalinowski said she cared deeply about "the Robotics team and the work we built together", but that "surveillance of Americans without judicial oversight and lethal autonomy without human authorisation are lines that deserved more deliberation than they got".
"This was about principle, not people," she wrote in a post on X.
Kalinowski wrote in a follow-up post that she took issue with the haste of OpenAI's Pentagon deal.
"To be clear, my issue is that the announcement was rushed without the guardrails defined," she wrote.
"It's a governance concern first and foremost. These are too important for deals or announcements to be rushed."
Anthropic's refusal to authorise use of its Claude AI models had prompted backlash from US officials.
Kalinowski previously worked at Meta, where she developed its augmented reality glasses.
(FRANCE 24 with AFP)
Anthropic vows court fight in Pentagon row
By AFP
March 8, 2026

US tech giant Anthropic will bar Chinese-linked users from its artificial intelligence services. — © AFP Julie JAMMOT
Anthropic chief executive Dario Amodei has said the company has “no choice” but to challenge in court the Pentagon’s formal designation of the artificial intelligence firm as a risk to US national security.
Writing in a blog post on Thursday, the CEO insisted, however, that the ruling’s practical scope is narrower than initially suggested, signaling that the designation would not have a catastrophic effect on the company.
Amodei said the Department of War — the name preferred by the Trump administration for the Department of Defense — confirmed in a letter that Anthropic and its products, including its widely-used Claude AI model, have been deemed a supply chain risk.
It is the first time a US company has been publicly given such a designation, a label typically reserved for organizations from foreign adversary countries, such as Chinese tech company Huawei.
Amodei, in his blog post, said the company disputes the legal basis of the action but sought to reassure customers.
“It plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts,” he wrote.
The designation will require defense vendors and contractors to certify that they don’t use Anthropic’s models in their work with the Pentagon.

Anthropic chief executive Dario Amodei said the company disputes the legal basis of the action – Copyright AFP/File FABRICE COFFRINI
But Amodei argued that under the relevant statute, the intention is “to protect the government rather than to punish a supplier” and requires the Department of Defense to use “the least restrictive means necessary.”
Microsoft, one of Anthropic’s biggest partners, agreed with that reading, telling US media its lawyers studied the designation and concluded that Anthropic products, including Claude, can remain available to its customers other than the Department of War.
– ‘Sloppy’ –
The dispute erupted after Anthropic infuriated Pentagon chief Pete Hegseth by insisting its technology should not be used for mass surveillance or fully autonomous weapons systems.
Washington hit back, saying the Pentagon operates within the law and that contracted suppliers cannot dictate terms on how their products are used.
Amodei also used the statement to apologize for an internal company memo leaked to the press this week, in which he told staff the actions against the company were politically motivated.
In the memo, Amodei wrote that “the real reasons” the Trump administration “do not like us is that we haven’t donated to Trump (while OpenAI/Greg have donated a lot)”, referring to Greg Brockman, the president of ChatGPT-maker OpenAI, who has donated $25 million to Trump.
Amodei called the memo an “out-of-date assessment of the current situation,” written under duress on a day that saw his company under extreme pressure from the government.
OpenAI initially swooped in to replace Anthropic in its contract with the US military, but that move backfired when senior OpenAI staff expressed discomfort with the deal.
OpenAI CEO Sam Altman later said the deal was “sloppy” and that he was working to revise it.
The standoff with the Pentagon has had a silver lining for Anthropic, which was founded in 2021 by former OpenAI staffers with a focus on AI safety.
The conflict has helped propel the Claude app to the top of download rankings on Apple and Google smartphones.
Anthropic also indicated to AFP that the number of paying users of its Claude model had doubled since the beginning of the year and that its app is currently downloaded more than a million times a day.
Questions over AI capability as tech guides Iran strikes
By AFP
March 6, 2026

Artificial intelligence tools can also be found built into semi-autonomous attack drones and other weapons - Copyright AFP ATTA KENARE
Tiphaine Le Liboux and Thomas Urbain
The latest bout of fighting between the United States, Israel and Iran has seen AI deployed as never before to sift intelligence and select targets, although the technology’s use in war remains hotly debated.
Different forms of artificial intelligence have reportedly been used to guide the Israeli campaign in Gaza and the capture of Venezuelan leader Nicolas Maduro in an American raid.
And experts believe the technology has helped select targets for the thousands of US and Israeli strikes on Iran since February 28 — although exact uses have yet to be confirmed.
Today “every military power of any significance invests hugely in military applications of AI,” said Laure de Roucy-Rochegonde of French think tank IFRI.
“Almost any military function can be boosted with AI,” from “logistics to reconnaissance, observation, information warfare, electronic warfare and cybersecurity,” she added.
AI tools can also be found built into semi-autonomous attack drones and other weapons.
But one of their best-known uses is in shortening the so-called “kill chain”, the time and decision-making between detecting a target and striking it.
US forces use the Maven Smart System (MSS) built by Palantir, which the company says can identify and prioritise potential targets.
The Washington Post reported this week that Anthropic’s Claude generative AI model has been integrated with Maven to boost the tool’s detection and simulation capabilities.
Palantir and Anthropic did not respond to AFP’s requests for comment.
AI algorithms “allow us to move much faster in handling information, and above all to be more comprehensive,” said Bertrand Rondepierre, head of the French army’s AI agency AMIAD.
The technology can sift through vast quantities of data, including “satellite images, radar, electromagnetic waves, sound, drone images and sometimes real-time video,” he added.
– Human control –
AI’s deployment in war poses a slew of moral and legal questions, notably over the extent of human control of such systems’ actions.
The debate was brought to the fore during the fighting in Gaza, where Israeli forces used a programme dubbed “Lavender” to identify targets — within a certain margin of error.
That application worked “because it covered a very limited area”, de Roucy-Rochegonde said.
Israel also has a “mass surveillance system” that could feed data about the enclave’s inhabitants into Lavender.
“It seems less likely that such a system has been set up in Iran,” she added.
“If something does go wrong, then who’s responsible?” Peter Asaro, chair of the International Committee for Robot Arms Control (ICRAC), said in an interview with AFP.
The widely reported bombing of an Iranian school — which authorities there say killed 150 people — could be a case of mistaken AI targeting, he added.
Neither the United States nor Israel has acknowledged responsibility for the strike.
AFP was unable to reach the scene of the school to verify what happened there.
But the site was close to two facilities controlled by the Islamic Revolutionary Guard Corps (IRGC), Tehran’s powerful ideological elite.
“They didn’t distinguish it from the military base as they should have, (but) who is they?” he asked — human or machine?
If AI was used, he argued that the key question is “how old was the data” used for the targeting, and whether the misdirected strike stemmed from “a database error”.
– Step by step –
Rondepierre said that AIs “operating without anyone being in control” are “science fiction”.
In France, at least, “military commanders are at the heart of the action and the design of these systems,” he insisted.
“No military decision-maker would agree to use an AI if he didn’t have trust in and control over what it’s doing,” Rondepierre added.
“They know what the risks involved are, what the capabilities of these systems are and what contexts they can use them in, with what level of trust.”
Today is just the “beginning” of the use of AI by the world’s armed forces, said Benjamin Jensen of Washington-based think tank CSIS, who has taken part in tests of AI in military decision-making over the past decade.
The world’s armies “haven’t fundamentally rethought how we plan, how we conduct operations, to take advantage” of AI’s capabilities, he added.
“It’s going to take a generation for us to really figure this out.”