AI Is Already Choosing Who Dies in Two Wars. The Only Company That Said No Got Blacklisted.
Palantir's Maven helped select 1,000 Iran targets in 24 hours. Ukraine is sharing kill data to train allied AI. Anthropic refused — and got banned.

Palantir's Maven AI system helped US commanders select 1,000 Iranian targets in the first 24 hours of airstrikes. Israel's Lavender system — trained on Gaza — is now picking targets in Tehran. And two days ago, Ukraine invited the world's militaries to train their own kill-chain AI on real battlefield footage of drones hitting people.
AI isn't coming to war. It arrived. And this week, the only major AI company that refused to participate got blacklisted by the US government.
Three Wars, Three AI Systems, One Week
Here's what happened in the last seven days.
In Iran, the US struck more than 2,000 targets in four days. That pace would've been physically impossible a decade ago. The bottleneck in modern warfare isn't bombs — it's deciding where to drop them. Palantir's Maven Smart System crunches satellite imagery, drone feeds, intercepted communications, and human intelligence into target recommendations at machine speed. According to Asia Times, Maven processed enough data to recommend 1,000 strikes in a single day.
Israel's running its own system. Lavender, first exposed by Israeli magazine +972, builds profiles of suspected militants from phone data, social connections, and movement patterns. Its companion system Gospel identifies buildings. Together, they've shaped the targeting behind more than 70,000 deaths in Gaza. Arms industry insiders told The Independent that Israel is "assumed to be using both AI systems in Iran."
Meanwhile in Ukraine, Defense Minister Mykhailo Fedorov announced on March 12 that Kyiv would open its battlefield data — real drone footage of real strikes on real people — to allied nations and defense companies. "The future of warfare belongs to autonomous systems," Fedorov wrote. The program is built inside Ukraine's Ministry of Defense, with security audited to US NIST standards.
The International Committee of the Red Cross has opposed automated targeting without human oversight. Ukraine's response: we must "outperform Russia in every technological cycle."
The Speed Problem
The military term is "kill chain" — the sequence from identifying a target to striking it. AI compresses every link.
Before Maven, selecting a single target could take days of analyst work. Now it takes minutes. The system matches weapons to targets, estimates collateral damage, and queues strike packages faster than any human team could. A Pentagon presentation this week showed Maven's interface with dozens of red icons across Iran — each one a recommended strike.
One of those icons, according to The Register, corresponded to Minab, where a missile struck near a girls' school. Tehran says more than 160 people died.
The question isn't whether AI makes militaries faster. It does. The question is whether faster is better when "faster" means less time to catch mistakes.
Lavender's own developers told +972 it had a 10% error rate. One in ten people it flagged wasn't who the system thought they were. At industrial scale — tens of thousands of names in a database — that's thousands of wrong targets.
The Company That Said No
While this was unfolding, one of the world's leading AI companies was fighting a different battle.
Anthropic's Claude was the first AI system deployed on the Pentagon's classified networks. But when the Defense Department pushed to use it for autonomous weapons and mass surveillance of US citizens, Anthropic drew two red lines: no autonomous killing, no domestic spying.
The Pentagon's chief technology officer, Emil Michael, called those restrictions "an irrational obstacle." Defense Secretary Pete Hegseth accused Anthropic of "arrogance and betrayal." On March 4, the Trump administration blacklisted the company, designating it a supply chain risk — the corporate equivalent of a kill shot.
Defense contractors were ordered to stop using Claude immediately. CNBC reported that companies told employees to switch to other AI providers within days.
OpenAI stepped in. CEO Sam Altman said his company would "see if there is a deal" that works with its principles, asking only that the contract exclude "unlawful" uses and domestic surveillance. The distinction is subtle but real: OpenAI drew one red line where Anthropic drew two.
Microsoft filed an amicus brief supporting Anthropic's legal challenge. But the message to every AI company was clear: cooperate or be cut off.
The Drone Videos Nobody's Talking About
Ukraine's data-sharing program deserves more attention than it's getting.
What Kyiv is offering isn't sanitized training data from a lab. It's footage of drones finding, tracking, and killing people in active combat. Labeled, catalogued, and ready to feed into neural networks.
For defense companies building autonomous targeting software, this is the holy grail. Real-world data compresses development timelines in ways no simulation can match. For governments, it's a shortcut — field AI-enabled weapons without generating your own combat datasets.
Ukraine's DELTA battlefield management system already uses neural networks to automatically detect ground and aerial targets in real time. As Deputy Defense Minister Yuriy Myronenko put it: "You can control only with data. Otherwise, I don't even know how you can control such a number of drones, people, front lines."
Two Phantom MK-1 humanoid robots — made by US startup Foundation, which holds $24 million in military contracts — were sent to Ukraine in February for frontline reconnaissance. TIME reports that Foundation is preparing Phantoms for combat deployment, and that the Pentagon "continues to explore militarized humanoid prototypes designed to operate alongside war fighters."
AI-powered drones in Ukraine are already firing autonomously. Russian radio jamming makes remote control impossible, so the machines assess targets and shoot on their own. This isn't theoretical. It's happening now.
What's Actually at Stake
The Albis Perception Gap Index scored Israel's Lavender and Gospel AI targeting systems at 7.0, with Middle Eastern and US outlets diverging most sharply on whether these tools represent military precision or automated atrocity.
That gap captures the core tension. Every military in the world wants AI that kills faster and more accurately. Every human rights organization wants someone accountable when those systems get it wrong.
Right now, both things are true simultaneously: AI targeting is producing strike campaigns of unprecedented speed, and it's producing errors at a rate that, scaled up, means hundreds or thousands of wrong targets.
The Anthropic standoff showed what happens when a company tries to hold the line. It gets replaced. Ukraine's data program shows the direction of travel: more autonomy, more speed, less human oversight. The Phantom robots show what's next after drones.
There's no treaty governing AI weapons. No international agreement on autonomous targeting. The Geneva Conventions were written for a world where humans made every decision to kill.
That world ended sometime in the last two years. We just haven't caught up yet.
Sources & Verification
Based on 5 sources from 3 regions
- The Independent (Europe)
- Military Times (North America)
- Asia Times (Asia-Pacific)
- The Guardian (Europe)
- TIME (North America)