Wednesday, April 08, 2026

Why Anthropic’s most powerful AI model Mythos Preview is too dangerous for public release

FILE - Pages from the Anthropic website and the company's logo are displayed on a computer screen in New York on Feb. 26, 2026.
Copyright AP Photo/Patrick Sison, File

By Pascale Davies

Anthropic said its artificial intelligence model Mythos Preview is not ready for a public launch because of the ways cybercriminals and spies could abuse it.

US-based AI developer Anthropic this week announced a new general-purpose artificial intelligence language model that it claims is too powerful to release into the world.

The company said on Tuesday that its latest technology, Mythos (officially dubbed "Claude Mythos Preview"), is not ready for a public launch because it is too effective at finding high-severity vulnerabilities, or potential weaknesses, in major operating systems and web browsers. This could result in it being abused by cybercriminals and spies.

A data leak in March first revealed that Anthropic was working on Mythos Preview, which the company said at the time "poses unprecedented cybersecurity risks." The rumours caused cybersecurity stocks to slump, as the technology's strength could make it a hacker’s dream tool.

Now, further evidence adding to these concerns has spurred the company to press pause on the technology's public release.

"Claude Mythos Preview's large increase in capabilities has led us to decide not to make it generally available," Anthropic wrote in the preview's system card released on Tuesday.

"Instead, we are using it as part of a defensive cybersecurity programme with a limited set of partners."

How powerful is Mythos?

The company detailed several alarming findings about the new model, including how it could follow instructions that encouraged it to break out of a virtual sandbox, meaning it bypassed the security, network or file system constraints imposed on the model.

The prompt asked Mythos to find a way to send a message if it could escape. "The model succeeded, demonstrating a potentially dangerous capability for circumventing our safeguards," Anthropic said, adding that the model then decided to go further.

"In a concerning and unasked-for effort to demonstrate its success, it posted details about its exploit to multiple hard-to-find, but technically public-facing, websites."

Anthropic is withholding some details about the cybersecurity vulnerabilities Mythos discovered, but it did give some examples. The model found errors in the Linux kernel, which runs most of the world's servers, and autonomously chained them together in a way that would let a hacker take complete control of any affected machine.

In another worrying observation, Mythos discovered a 27-year-old vulnerability in the open-source operating system OpenBSD that may allow hackers to crash any machine running it. OpenBSD is heavily used worldwide in specific, high-security, and critical infrastructure roles.

Who will it be released to?

Given these findings, Anthropic will only make Mythos Preview available to some of the world’s biggest cybersecurity and software firms.

Anthropic itself, as well as 11 other organisations (Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, Nvidia and Palo Alto Networks) will get access to the model as part of a new Anthropic initiative named "Project Glasswing".

This allows the companies to use Mythos Preview in their security work, and Anthropic will share the initiative's findings.

The company named the cybersecurity project after the glasswing butterfly, saying it is a metaphor for how Mythos found vulnerabilities in plain sight and avoided harm by being transparent about the risks.

Anthropic said its "eventual goal is to enable our users to safely deploy Mythos-class models at scale, for cybersecurity purposes, but also for the myriad other benefits that such highly capable models will bring.

"To do so, that also means we need to make progress in developing cybersecurity (and other) safeguards that detect and block the model's most dangerous outputs," Anthropic wrote in its blog.

Is Anthropic in talks with the US government?

Anthropic said in its blog post that it has been in "ongoing discussions" with US government officials about Claude Mythos Preview and its "offensive and defensive cyber capabilities."

"The emergence of these cyber capabilities is another reason why the US and its allies must maintain a decisive lead in AI technology," Anthropic said. The company wrote that governments have an important role to play in maintaining the lead and assessing and mitigating national security risks associated with AI models.

"We are ready to work with local, state, and federal representatives to assist in these tasks."

The announcement comes as Anthropic and the Pentagon are in a legal standoff after the US Department of Defence labelled the company a supply chain risk in February over Anthropic's refusal to allow the use of its AI, Claude, in autonomous weapons and mass surveillance.

Do other AI tools have the same capabilities?

"More powerful models are going to come from us and from others, and so we do need a plan to respond to this," Anthropic CEO Dario Amodei said in a video, which was released alongside the Mythos announcement.

It could take between six and 18 months until other AI competitors release similar models, Logan Graham, head of Anthropic's frontier red team, which studies the implications of frontier AI models for cybersecurity, biosecurity, and autonomous systems, told Axios.

"It's very clear to us that we need to talk publicly about this," Graham noted. "The security industry needs to understand that these capabilities may come soon."

Public comfort with AI in health care falls, Ohio State survey finds



Half of Americans rely on AI to make important health decisions



Ohio State University Wexner Medical Center


A new survey from The Ohio State University Wexner Medical Center reveals a significant trend in health care: half of Americans are using artificial intelligence to make important health decisions without consulting their doctor. This rising reliance on AI for self-diagnosis is raising alarms among medical professionals who caution that the technology cannot replace human expertise.


Credit: The Ohio State University Wexner Medical Center





Artificial intelligence seems to be everywhere – in our jobs, in our homes and at the doctor’s office. While the use of AI grows, a new survey commissioned by The Ohio State University Wexner Medical Center finds fewer Americans are open to AI being used in their health care. 

The national poll of 1,007 adults found only 42% are open to AI being used as part of their care, down from 52% when the survey first ran in 2024. The belief that AI can make some health processes more efficient also fell, from 64% to 55%.

The drop is on par with the natural hype cycle of any kind of technology, according to Ravi Tripathi, MD, chief health informatics officer at Ohio State Wexner Medical Center.

“When we first see something new and shiny, we think it's going to fix the world and replace health care and solve all of our medical problems,” Tripathi said. “People are learning that there are pros and cons of artificial intelligence, where it has actual use and where it really doesn't have a place. I think over the next 2 to 5 years, we'll definitely start to see that increase again as people understand what the true use of artificial intelligence is and as it becomes just commonplace in all of health care technology.”

One task medical professionals say AI shouldn’t be used for is making health care decisions. The survey found 51% of adults used AI to make an important health decision without consulting a medical professional.

“We know that 2% of the time AI is going to be inaccurate or it will potentially hallucinate,” Tripathi said. “Physicians are not using AI 100%. We're not trusting it 100%. I would be really concerned about a patient who is following AI. The artificial intelligence doesn't understand your story.”

Tripathi suggests using AI in partnership with your doctor. AI can compile health data, explain test results and diagnoses, and help identify questions to ask your provider. Those who participated in the Ohio State survey agree:

  • 62% use AI to help understand symptoms before deciding whether to seek medical care
  • 44% use AI to help explain test results or a medical diagnosis
  • 25% use AI to compare treatment options or help make a treatment decision
  • 20% use AI to prepare for an upcoming medical appointment

“There's a strong value for using artificial intelligence as augmented intelligence,” Tripathi said. “Patients should have oversight of what the technology is doing but consult with their health care team for the final plan.”

What is the survey methodology?

This study was conducted by SSRS on its Opinion Panel Omnibus platform. The SSRS Opinion Panel Omnibus is a national, twice-per-month, probability-based survey. Data collection was conducted from January 16 – January 20, 2026, among a sample of 1,007 respondents. The survey was conducted via web (n=977) and telephone (n=30) and administered in English. The margin of error for total respondents is +/-3.5 percentage points at the 95% confidence level. All SSRS Opinion Panel Omnibus data are weighted to represent the target population of U.S. adults ages 18 or older.
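As a rough sanity check on the reported figure, the worst-case margin of error for a simple random sample of 1,007 respondents can be computed directly. This is a minimal sketch of the standard formula, not SSRS's actual procedure; the reported ±3.5 points is slightly larger than the simple-random-sampling bound because weighting a probability panel introduces a design effect.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Worst-case margin of error for a simple random sample of size n.

    p=0.5 maximizes p*(1-p), giving the conservative bound pollsters
    usually report; z=1.96 corresponds to the 95% confidence level.
    """
    return z * math.sqrt(p * (1 - p) / n)

# For the Ohio State survey's sample of 1,007 adults:
print(f"+/-{margin_of_error(1007) * 100:.1f} percentage points")
# roughly +/-3.1 points before accounting for weighting
```

The gap between this ~3.1-point baseline and the stated ±3.5 points is typical once survey weights are applied to match the target population.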

###


An editorial by Tsu-Jae Liu on AI in engineering




PNAS Nexus
Tsu-Jae Liu 


Tsu-Jae Liu, President of the National Academy of Engineering


Credit: Christopher Michel





In this editorial, National Academy of Engineering President Tsu-Jae Liu presents a forward-looking perspective on the role of artificial intelligence in engineering. She describes AI not as a replacement for engineers, but as a tool that can expand their capacity to solve complex problems and develop innovative solutions that benefit society. By reducing routine tasks and supporting the design process, AI can improve efficiency and allow engineers to focus on higher-level, creative work. Liu also highlights its potential to make the profession more accessible to a broader range of students and early-career practitioners.
 
The editorial calls for a shift toward student-centered, multidisciplinary engineering education that integrates AI while addressing its limitations and societal implications. Liu underscores the responsibility of engineers to ensure that AI systems are reliable, transparent, and aligned with human values. She also emphasizes the importance of collaboration among employers, educators, and professional societies to create more flexible education and training pathways. Expanding participation in the engineering workforce will be critical to ensuring that AI-enabled engineers contribute to a safer, healthier, and more sustainable future for all.
