The Pentagon Banned Its Best AI. Now Staff Are Using Excel.
Three weeks after blacklisting Anthropic's Claude, Pentagon workers are reverting to spreadsheets while officials quietly bet the ban won't last.

On March 3, the Pentagon ordered every branch, contractor, and partner to stop using Anthropic's Claude AI. Three weeks later, Defense Department staff are querying datasets in Microsoft Excel, developers have lost the coding tool they built their workflows around, and at least one federal agency is openly betting the ban gets reversed before it fully takes effect.
The situation is, by any reasonable measure, a mess. And it reveals something uncomfortable about how fast AI became load-bearing inside the world's most powerful military.
How Claude Became the Pentagon's Operating System
When Anthropic signed a $200 million, two-year contract with the Pentagon's Chief Digital and Artificial Intelligence Office in July 2025, it looked like a clean win for both sides. The military got what officials widely considered the most capable AI model available. Anthropic got the credibility — and the revenue — of a major defense partnership.
Claude became the first AI model approved to operate on classified military networks. Through Palantir's Maven Smart System platform, it was integrated into intelligence analysis, weapons targeting, and operational planning. Adoption was strong. Within months, Claude wasn't just a tool the Pentagon used — it was embedded in how the Pentagon worked.
When the US launched strikes against Iran on February 28, Claude was there too. Sources told CBS News and Reuters that the military used Claude for target selection, battlefield simulations, and intelligence queries during the operation — and continued using it even after the ban was announced.
An IT contractor who spoke to Reuters put it plainly: Claude "is the best." The alternative from xAI, Grok, "often produced inconsistent answers to the same query."
The Dispute That Broke Everything
The ban didn't come from a security failure or a data breach. It came from a contract negotiation.
The Pentagon wanted Anthropic to agree that Claude could be used for "all lawful uses" with no exceptions. Anthropic pushed back on two specific points: the model should not be used for mass surveillance of Americans, and it should not power fully autonomous weapons without human oversight.
Those aren't fringe positions. OpenAI's subsequent deal with the Pentagon included essentially the same carve-outs. But Anthropic drew the line first, and Defense Secretary Pete Hegseth responded by designating the company a "supply chain risk" — a classification normally reserved for firms with foreign ownership concerns or security vulnerabilities, not domestic AI companies in a contract dispute.
Anthropic sued. The legal challenge argues the designation is "legally unsound" and that Hegseth lacks the statutory authority to extend it beyond direct Pentagon contracts. Three federal contract experts told WIRED they couldn't determine which Anthropic customers, if any, are actually covered by the order.
Dean Ball, a senior fellow at the Foundation for American Innovation and former White House AI policy adviser, called it "the most shocking, damaging, and overreaching thing I have ever seen the United States government do."
The Slow-Motion Unraveling
Here's where it gets interesting. Three weeks into the ban, compliance is patchy at best.
Pentagon officials told Reuters that tasks previously handled by Claude — querying large datasets, analyzing information, generating reports — are in some cases being done manually with Microsoft Excel. That's not a minor inconvenience. It's a productivity collapse for teams that had finally gotten comfortable using AI after years of slow adoption.
Claude Code, Anthropic's software development tool, was in wide use across the Pentagon. Developers who built custom AI agents on it to sift through massive intelligence datasets now face losing months of work if they switch platforms.
The biggest headache sits with Palantir. Its Maven Smart System — worth over $1 billion in Pentagon contracts — depends on prompts and workflows built with Claude Code. Replacing Claude means rebuilding parts of the software and then recertifying the entire system for classified military networks. Joe Saunders, CEO of government contractor RunSafe Security, told Reuters that recertification alone could take 12 to 18 months.
"It's not just costly, it's a loss of productivity," Saunders said.
Some staff are complying because "no one wants to end their career over this," as one official put it. But others are "slow-rolling" the replacement — continuing to use Claude to build workflows even as they're technically supposed to be unwinding it.
The chief information officer of at least one federal agency told Reuters the agency plans to deliberately delay the phase-out, betting that the government and Anthropic will reach a deal before the six-month deadline.
The Strategic Gamble
The Pentagon now faces a question it created for itself: pivot fast to OpenAI, Google, or xAI — or unwind Anthropic slowly enough that it can snap back if the dispute resolves?
The fast-pivot option sounds clean, but it isn't. Every alternative model would need to be recertified for classified networks. The developers who built AI agents on Claude would need to rebuild them from scratch. And the military would be swapping a model that officials consider superior for alternatives they view as less capable — during an active conflict with Iran.
The slow-unwind option is what's actually happening. But it's creating a shadow system where Claude is technically banned but practically essential, where compliance is uneven, and where the Pentagon's AI infrastructure exists in a state of organized uncertainty.
Roger Zakheim, director of the Ronald Reagan Presidential Foundation and Institute, framed it carefully: "What we are seeing play out here is the tension of adoption, both inside the Pentagon as well as the political level."
What This Actually Tells Us
The Claude ban is a case study in what happens when a government becomes dependent on a technology faster than its institutions can absorb the implications.
In eight months — from July 2025 to March 2026 — Claude went from new contract to classified-network deployment to active wartime use to political blacklisting to quiet indispensability. That timeline is faster than most Pentagon procurement cycles for office furniture.
The irony is thick. Anthropic's two conditions — no mass surveillance of Americans, no fully autonomous weapons — are positions that most Americans would probably agree with. OpenAI negotiated the same terms and got a deal. The difference wasn't the substance. It was the sequence. Anthropic said no first, in public, while the administration was preparing for war.
Now the Pentagon is learning a lesson that every organization learns eventually: banning a tool people depend on doesn't make the dependency disappear. It just makes the work worse.
Whether Anthropic gets reinstated, replaced, or lingers in this liminal state may depend less on policy and more on how long Pentagon staff can tolerate doing AI work in spreadsheets. Based on what they're telling reporters, the answer is: not very long at all.
Sources & Verification
Based on 5 sources from 3 regions
- Reuters (International)
- Military Times (North America)
- WIRED (North America)
- The Guardian (Europe)
- World Policy Hub (International)