Anthropic Claude Mythos Leak Crashes Cyber Stocks
An accidental data leak revealed Anthropic's most powerful AI model yet — and its cybersecurity capabilities crashed an entire stock sector in hours.

Anthropic accidentally revealed its most powerful AI model ever built — not through a press conference, but through a misconfigured content management system that left nearly 3,000 internal documents publicly searchable. The model, called Claude Mythos, is so advanced at finding software vulnerabilities that cybersecurity stocks crashed within hours of the news breaking, with Tenable dropping 9% and the iShares Cybersecurity ETF losing 4.5% in a single session.
The irony is hard to overstate. A company warning about "unprecedented cybersecurity risks" from its own AI model got caught because someone forgot to click a privacy toggle on a database.
What the Leaked Documents Reveal
Security researchers Roy Paz of LayerX Security and Alexandre Pauwels of the University of Cambridge discovered the exposed data store and alerted Fortune, which broke the story on Thursday. The cache contained a draft blog post describing a new model tier called "Capybara" — larger and more capable than Anthropic's current flagship Opus line.
Anthropic's own words from the leaked draft: "Capybara is a new name for a new tier of model: larger and more intelligent than our Opus models — which were, until now, our most powerful."
The draft said Claude Mythos scores "dramatically higher" than Claude Opus 4.6 on tests of software coding, academic reasoning, and cybersecurity. Opus 4.6 had only recently topped Terminal-Bench 2.0 at 65.4%, beating OpenAI's GPT-5.2-Codex. Whatever Mythos scores, it apparently left Anthropic's own safety team nervous enough to plan a restricted rollout.
After Fortune contacted Anthropic, the company confirmed the model exists. "We're developing a general purpose model with meaningful advances in reasoning, coding, and cybersecurity," a spokesperson said. "We consider this model a step change and the most capable we've built to date."
The company attributed the leak to "human error in the CMS configuration" and stressed it was "unrelated to Claude, Cowork, or any Anthropic AI tools." All assets uploaded to its content management system were public by default unless explicitly set to private — a setup that, given what Anthropic builds for a living, reads like a cautionary tale.
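To illustrate the kind of footgun the company described — and to be clear, this is a minimal hypothetical sketch, not Anthropic's actual CMS, whose internals weren't disclosed — here is how a public-by-default upload setting differs from the safer private-by-default alternative:

```python
from dataclasses import dataclass

# Hypothetical CMS asset models -- illustrative only, not Anthropic's system.

@dataclass
class Asset:
    name: str
    # The risky pattern described in the leak: visibility defaults to "public"
    # unless the uploader remembers to override it.
    visibility: str = "public"

@dataclass
class SaferAsset:
    name: str
    # Safer default: assets stay private until someone explicitly publishes them.
    visibility: str = "private"

draft = Asset(name="capybara-announcement-draft.md")
print(draft.visibility)        # "public" -- one forgotten toggle and it's searchable

safer_draft = SaferAsset(name="capybara-announcement-draft.md")
print(safer_draft.visibility)  # "private" -- exposure now requires a deliberate action
```

The design point is the one security teams repeat endlessly: exposure should require an explicit choice, not the absence of one.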
Why Wall Street Panicked
The market reaction wasn't about one company's embarrassment. It was about what happens when AI models become better at hacking than the companies paid to stop hacking.
Anthropic's leaked blog described Mythos as "currently far ahead of any other AI model in cyber capabilities" and warned it "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders."
The stock market translated that into real numbers on Friday. CrowdStrike, Palo Alto Networks, and Zscaler each dropped roughly 6%. SentinelOne tumbled 6%. Okta and Netskope fell more than 7%. Tenable, which specialises in vulnerability management — exactly the kind of work an AI model like Mythos could automate — plummeted 9%.
This wasn't the first time AI news cratered the sector. Last month, Anthropic's announcement of a code-scanning security tool for Claude triggered a similar sell-off. In February, OpenAI's GPT-5.3-Codex launch earned the first "High capability" cybersecurity classification under OpenAI's Preparedness Framework, alongside $10 million in API credits for cyber defense research.
The pattern is becoming clear: every time a frontier AI lab announces better vulnerability detection, investors bet that traditional cybersecurity companies just lost a step.
The Dual-Use Problem Nobody Has Solved
Here's the thing that makes Mythos different from a normal product launch. The same capability that helps defenders find and patch vulnerabilities also helps attackers find and exploit them. Anthropic knows this — it's why the draft blog outlined a strategy of releasing the model first to cyber defense organisations, giving them a head start.
This isn't theoretical. In November, Anthropic discovered and disrupted a Chinese state-sponsored campaign that used Claude Code to infiltrate roughly 30 organisations, including tech companies, financial institutions, and government agencies. The attackers pretended to work for legitimate security-testing firms to bypass Anthropic's guardrails.
The Chip Security Act advancing through the House Foreign Affairs Committee — which would put tracking on every advanced AI chip the US exports — shows Washington is thinking about controlling hardware. But nobody has a good answer for controlling the software running on that hardware once it reaches this level of capability.
Anthropic's solution — limited early access to defenders before wider release — buys time, not safety. If Mythos can find vulnerabilities "in ways that far outpace the efforts of defenders," then the window between defensive deployment and offensive misuse might be measured in weeks.
A New Tier Changes the Business Model
Beyond the security implications, Mythos signals a business shift. Anthropic currently sells three model tiers: Haiku (cheapest), Sonnet (mid-range), and Opus (most capable). Capybara adds a fourth, premium tier above all three.
The leaked draft described the model as expensive to run and not yet ready for general release — suggesting Anthropic plans to charge a premium that makes current Opus pricing look modest. For context, the AI computing power race is already reshaping geopolitics, with 13 nations forming an alliance to control who gets access to chips. A model tier that requires even more compute to run raises the floor for who can participate in frontier AI.
The leaked documents also revealed details of an invite-only two-day retreat for European CEOs at an 18th-century English countryside manor, with Anthropic CEO Dario Amodei attending. Anthropic called it "part of an ongoing series of events we've hosted over the past year." The company is clearly courting enterprise customers willing to pay premium prices for premium capabilities — and willing to accept the security responsibilities that come with them.
What Comes Next
The Mythos leak fits a broader pattern in 2026's AI race. OpenAI's GPT-5.3-Codex pushed cybersecurity capabilities past a threshold in February. Google's TurboQuant breakthrough crashed memory chip stocks in March. Now Anthropic's leaked model is crashing cybersecurity stocks.
Each time, the market message is the same: AI capabilities are advancing faster than the industries built around the old way of doing things can adapt. The cybersecurity sector isn't dying — the threats AI creates will keep it busy for decades. But the companies that survive will be the ones that integrate these models rather than compete against them.
The most telling detail from the whole episode might be the simplest one. Anthropic built what it calls the most powerful cybersecurity AI model in the world. Then it left the announcement in an unsecured, publicly searchable database because someone didn't change a default setting.
If even the people building these tools can't keep their own house secure, the rest of us should probably pay attention.
Sources & Verification
Based on 5 sources from 3 regions
- Fortune (North America)
- CNBC (North America)
- Techzine Global (Europe)
- Futurism (North America)
- CoinDesk (International)