Court rejects Anthropic's appeal to pause supply chain risk label given by US government

The debate started when Anthropic refused to give the US government unfettered access to its AI chatbot, Claude.
A court in the United States has rejected American artificial intelligence (AI) company Anthropic's request to shield it from being labelled a supply chain risk by the country's government. The label has never before been applied to an American company.
The Trump administration labelled the AI company a supply chain risk and ordered federal agencies to stop using Anthropic's AI assistant Claude in February, after the company refused to allow unrestricted military access to its model.
This label blocks contractors who work with the Pentagon from using the company's AI models on Department of Defence contracts.
The disputed restrictions include Anthropic's limits on using Claude for lethal autonomous weapons without human oversight and for mass surveillance of Americans.
In 2025, Anthropic signed a $200 million (€171.5 million) contract with the Pentagon to deploy its technology within the military's systems.
Following that deal, the AI chatbot was rolled out across the US government's classified information networks, deployed at national nuclear laboratories, and used for intelligence analysis directly for the Department of Defence.
This setback for Anthropic in Washington comes after the company won a separate lawsuit focused on the same issues in a San Francisco court, which forced President Donald Trump’s administration to remove the label.
Anthropic filed the two lawsuits in San Francisco and Washington last month and accused the Trump administration of engaging in an "unlawful campaign of retaliation."
In its March filing, the Department of Defence wrote that Anthropic might "attempt to disable its technology or preemptively alter the behaviour of its model" before or during a "warfighting operation" if the company "feels that its corporate 'red lines' are being crossed."
The panel at the D.C. Circuit Court of Appeals said it did not see any reason to revoke the Trump administration's actions because "the precise amount of Anthropic's financial harm is not clear." However, the appeals court will be hearing more evidence from this case in May.
“We’re grateful the court recognised these issues need to be resolved quickly and remain confident the courts will ultimately agree that these supply chain designations were unlawful," Anthropic said in a statement to the Associated Press.
Meta enters AI race with Muse Spark, its first major model since spending spree — here's what to know

Meta has unveiled its first major AI model in nine months, following a $14.3 billion (€12.24 billion) investment spree and executive hiring push to rival OpenAI and Google.
American tech company Meta has revealed its first major artificial intelligence (AI) model since it went on a spending spree nine months ago to boost its presence in the fiercely competitive AI market.
Meta unveiled the model, called Muse Spark, on Wednesday, and claims it is smarter and faster than its earlier technologies.
The company, founded by Mark Zuckerberg, invested $14.3 billion (€12.2 billion) in the firm Scale AI in June 2025. It also hired Scale AI's CEO and co-founder, Alexandr Wang, to oversee Meta Superintelligence Labs, which houses the company's teams that work on foundational models.
Zuckerberg then embarked on a hiring spree, recruiting executives from rivals such as OpenAI, Anthropic, and Google.
"Over the last nine months, Meta Superintelligence Labs rebuilt our AI stack from the ground up, moving faster than any development cycle we have run before," Meta wrote in a blog post.
"This initial model is small and fast by design, yet capable enough to reason through complex questions in science, math, and health. It is a powerful foundation, and the next generation is already in development."
Muse Spark appears to be a major upgrade over Meta’s last big release, Llama 4, which came out in April 2025.
What do we know about the AI model?
Meta has said Muse Spark has advanced reasoning capabilities and can answer complex questions, especially in science and maths. It added that the AI model is particularly good at providing medical advice.
"To improve Muse Spark's health reasoning capabilities, we collaborated with over 1,000 physicians to curate training data that enables more factual and comprehensive responses," the company's blog post read.
The new model will power the company’s digital assistant in the Meta AI app and website. As well as coming soon to Facebook, Instagram, WhatsApp and Messenger, it will also debut on the Ray-Ban Meta AI glasses.
The Meta AI app and site will also gradually feature a so-called contemplating mode for the most complicated queries and tasks, according to the company.
The contemplating mode will use several AI agents to help "reason in parallel," helping it "compete with the extreme reasoning modes of frontier models such as Gemini Deep Think and GPT Pro," the Meta technical blog read.
Zuckerberg said in a social media post that Meta's goal is to build AI products that "don't just answer your questions but act as agents that do things for you."
AI agents are designed to take autonomous actions on a user's behalf, acting on user preferences rather than waiting for explicit instructions at each step.
This differs from AI chatbots, which are designed primarily for conversation and serve more as a co-pilot assisting humans.
One other point of interest marking a possible shift for the company is that Meta originally made its AI models open source, which generally means the software's source code is available for anyone to use, modify, and distribute.
But Meta’s new model is not available for download, meaning the technology is not open source.
Muse Spark is only available in the United States for the moment, the company said.