Anthropic Accidentally Leaks Claude Mythos, Its Most Powerful AI Model Yet

Anthropic has confirmed it is testing a powerful new AI model called Claude Mythos after a data leak accidentally exposed internal documents describing its capabilities. The company acknowledged the model represents "a step change" in AI performance and is "the most capable we've built to date."
What Happened
A misconfiguration in one of Anthropic's external content management systems left nearly 3,000 unpublished assets — including draft blog posts, images, PDFs, and research papers — in a publicly accessible data cache. The CMS defaulted uploaded files to public access unless explicitly restricted, allowing anyone with technical knowledge to query the system and retrieve documents without authentication.
Fortune discovered and reported the leak on Thursday evening. Anthropic subsequently secured the exposed data and attributed the issue to "human error in the CMS configuration."
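Anthropic has not published details of the CMS involved, but the failure mode described above, uploads that default to public visibility unless someone remembers to restrict them, can be sketched in a few lines. All names and structures here are hypothetical illustrations, not Anthropic's actual system:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    """A stored file in a hypothetical CMS."""
    name: str
    # The dangerous pattern: visibility defaults to "public", so every
    # upload is world-readable unless the uploader explicitly restricts it.
    visibility: str = "public"

def upload(store: dict, asset: Asset) -> None:
    """Save an asset into the (in-memory) store."""
    store[asset.name] = asset

def fetch(store: dict, name: str, authenticated: bool = False):
    """Return the asset if the caller is allowed to read it, else None."""
    asset = store.get(name)
    if asset is None:
        return None
    if asset.visibility == "public" or authenticated:
        return asset
    return None

store = {}
upload(store, Asset("draft-blog-post.md"))                    # uploader forgot to restrict
upload(store, Asset("press-kit.pdf", visibility="private"))   # explicitly restricted

# An unauthenticated caller can retrieve the forgotten draft...
leaked = fetch(store, "draft-blog-post.md")
# ...but not the asset that was explicitly marked private.
blocked = fetch(store, "press-kit.pdf")
```

Flipping the default to `visibility: str = "private"` makes the safe state the one that requires no extra action, which is the standard remedy for this class of misconfiguration.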
What Is Claude Mythos?
According to the leaked draft blog posts, Claude Mythos introduces a new model tier called Capybara — positioned above the existing Opus tier. The document describes it as "a new name for a new tier of model: larger and more intelligent than our Opus models."
Compared to Claude Opus 4.6, Anthropic's current flagship, the Capybara-tier Mythos model achieves "dramatically higher scores" on tests of:
- Software coding — significant improvements in programming tasks
- Academic reasoning — stronger performance on complex reasoning benchmarks
- Cybersecurity — far ahead of any other AI model in cyber capabilities
The Cybersecurity Concern
Perhaps the most striking detail from the leak is Anthropic's own assessment of the model's cybersecurity capabilities. A draft blog post stated that Claude Mythos is "currently far ahead of any other AI model in cyber capabilities" and "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders."
Under Anthropic's own Responsible Scaling Policy, the model has been classified as "high capability" for cybersecurity tasks — a designation that triggered the company's cautious release approach.
Release Strategy
Anthropic is planning a deliberate, phased rollout rather than a broad public launch. The initial strategy focuses on:
- Providing early access to select customers
- Giving cyber defenders advance preparation time
- Limiting availability due to the model's high operational costs
No timeline for general public availability has been announced.
The CEO Summit
The same cache of exposed documents revealed details of an exclusive, invite-only CEO retreat planned in the English countryside. Hosted at an 18th-century manor-turned-hotel, the event will feature demonstrations of unreleased Claude capabilities for what Anthropic describes as Europe's "most influential business leaders," with CEO Dario Amodei scheduled to attend.
Anthropic's Response
An Anthropic spokesperson confirmed the company is developing "a general purpose model with meaningful advances in reasoning, coding, and cybersecurity," while emphasizing deliberate release planning. The company was quick to note that the leak was "unrelated to Claude, Cowork, or any Anthropic AI tools" and that no "core infrastructure, AI systems, customer data, or security architecture" was compromised.
What This Means
The accidental disclosure puts Anthropic — a company that has built its reputation on AI safety — in an uncomfortable position. The irony of a safety-focused AI lab exposing its own most sensitive documents through a basic CMS misconfiguration has not been lost on the industry.
More broadly, the leak confirms the rapid pace of AI capability advancement. If Anthropic's internal assessments are accurate, Claude Mythos represents a generational leap that could reshape both the potential and the risks of frontier AI systems.
Source: Fortune