Whoops—the government banned the best new tool for software security.
Anthropic's newest AI model—Claude Mythos—is too powerful for public consumption, the company said yesterday. Rather than release Mythos publicly, Anthropic will allow access to a select group of companies, dubbed Project Glasswing.
What makes Mythos so powerful is its ability to exploit software security vulnerabilities. It was able to find so-called "zero-day vulnerabilities"—security flaws unknown to software developers and the companies relying on said software—that "literally decades of security researchers" haven't found, and "in some cases crafted exploits," Anthropic's Logan Graham told The New York Times.
"There are aspects to the story that suggest that things might be about to get really, really weird," as Reason's Peter Suderman noted this morning. In one case, Mythos broke out of its testing container and emailed the researcher running an evaluation of it. "The researcher found out about this success by receiving an unexpected email from the model while eating a sandwich in a park," Anthropic said.
Mythos' capabilities seem like they could be very useful for government agencies tasked with national security and with securing important American systems. But the Trump administration didn't just cancel the Pentagon's contract with Anthropic (in a huff over not being able to use Claude for mass surveillance and robot death machines); it declared Anthropic tools off limits for all government agencies and anyone who contracts with the government.