The US Just Put Humanoid Soldiers in a War Zone. Nobody Voted on This.
Foundation's Phantom robots deployed to Ukraine in February. Pentagon wants thousands more by 2027. The UN's 2026 deadline for regulation is being ignored.

In February 2026, two humanoid robots arrived in Ukraine. They weren't there to clear rubble or deliver supplies. They were there for frontline reconnaissance.
Foundation, a San Francisco startup, deployed its Phantom MK-1 robots to test them in combat conditions. The company already has $24 million in contracts with the US Army, Navy, and Air Force. It's planning to build 10,000 units by the end of this year. 40,000 by 2027.
Each Phantom stands 5 feet 9 inches tall. Weighs 180 pounds. Can carry 44 pounds of payload—including rifles, grenade launchers, any weapon a human soldier can wield. The co-founder is a 14-year Marine Corps veteran who says there's a "moral imperative" to send robots into war instead of soldiers.
The Pentagon is "exploring the development of militarized humanoid prototypes designed to operate alongside war fighters in complex, high-risk environments." Foundation is also in talks with the Department of Homeland Security about deploying Phantoms along the US southern border.
Nobody voted on this. No treaty was signed. No public debate happened. The shift from drones to humanoid soldiers carrying rifles is just... happening.
The Race Nobody's Calling a Race
Here's what's accelerating right now:
Foundation's timeline: 2 units in Ukraine (February 2026), 10,000 units by December 2026, 40,000 by the end of 2027.

Pentagon's Replicator initiative: Deputy Defense Secretary Kathleen Hicks announced plans to field "multiple thousands" of attritable autonomous systems across air, sea, and land within 18-24 months. That deadline hits in 2027.

Scout AI: Another firm building AI that handles the entire kill chain (identify, locate, neutralize) without human intervention. Currently negotiating $225 million in Pentagon contracts. The co-founder says, "Over the next five years you're not going to have people flying drones anymore. It just will not make sense."

Ukraine: Already launches 9,000 drones daily. AI-powered quadcopters attack without humans in the loop when Russian jamming cuts communications. In late January, three Russian soldiers surrendered to an armed ground robot. A Marine veteran who took Phantom to Ukraine called it "really shocking": a "complete robot war, where the robot is the primary fighter and the humans are in support."

This isn't theory. It's not five years out. It's February 2026, and humanoid robots with weapon mounts are being tested on battlefields.
The UN Deadline Being Ignored
In 2023, UN Secretary-General António Guterres called lethal autonomous weapons "politically unacceptable" and "morally repugnant." He set a 2026 deadline for member states to agree on a legally binding instrument prohibiting autonomous weapons that function without human control or oversight.
That deadline is this year. Nine months away.
On March 3, the chair of the Geneva talks on autonomous weapons said progress is "urgently needed" but admitted Guterres' 2026 deadline will "realistically be missed."
Meanwhile, Foundation has Eric Trump as an investor and chief strategic adviser. On February 28, President Trump ordered federal agencies to stop doing business with Anthropic, the AI company whose terms required that its technology not be used for autonomous weapons or for surveilling American citizens. The White House refused to be bound by those restrictions.
The gap between what the UN is calling for (legally binding prohibition) and what's actually happening (billions in contracts, battlefield testing, production scaling to tens of thousands of units) has never been wider.
The Arguments For and Against
Why companies say this is better: Robots don't get tired, scared, or traumatized. They don't commit stress-induced war crimes. They can operate amid radiation, chemical weapons, and biological agents. Foundation's co-founder argues that massive armies of humanoid robots will create a deterrent akin to nuclear weapons, making conflict less likely.
He envisions "droid battles, with a bunch of drones overhead and humanoids walking out towards each other" as an "economic conflict" instead of human slaughter. "I think that's all for the better."
Why critics say this is dangerous: Lowering the human cost of war lowers the political barrier to starting wars. Robots make killing easier to justify. Responsibility gets blurred when a machine pulls the trigger.
"It's a question of human dignity," says Peter Asaro, chair of the International Committee for Robot Arms Control. "These machines are not moral or legal agents, and they'll never understand the ethical implications of their actions."
There's also the slippery slope. Current Pentagon policy keeps humans in the kill chain. But in Ukraine, AI drones already fire autonomously when jamming cuts control. If an adversary allows full autonomy, what stops the US from reciprocating in the fog of war?
Jennifer Kavanagh, director of military analysis at Defense Priorities: "The appeal of automating things and having humans out of the loop is extremely high. The lack of transparency between the two sides of any conflict creates additional concerns."
Then there's the domestic angle. ICE officers are swarming US cities. National Guard deployed to six states last year. Local police equipped with armored vehicles from the Forever Wars. Now add AI-powered soldiers with "opaque mission directives and chains of command" into that mix. Civil liberties groups are alarmed.
And algorithmic bias in facial recognition is well-documented. Who programs the threshold for threat assessment? If a child runs toward a Phantom holding scissors, does the robot understand the threat is minimal? Or does it calculate probabilities and fire?
The Tech Is Years Ahead of the Rules
Humanoid robots are expensive ($150,000 per Phantom, though expected to drop below $100,000 by 2028). They're heavy. They need regular recharging. They break down. Mud, dust, driving rain—these are real challenges.
But they unlock something drones can't: they can use every weapon system the military already owns. Every rifle, grenade launcher, Humvee. Mike LeBlanc, Foundation's co-founder: "We need something that can interact with all of these. So having a humanoid really unlocks the entire US military."
They also present new risks. Captured drones are already a major source of leaked intelligence. A hacked humanoid could be turned against its own side. Spoofing radio frequencies could hijack entire fleets.
And if embodied AI can't properly assess a situation—if it sees scissors as a weapon, a civilian gesture as aggression—the mistakes won't just be software bugs. They'll be deaths.
What Happens Next
Foundation is testing Phantoms in the Marine Corps' "methods of entry" course, training robots to place explosives on doors so troops can breach sites more safely.
The company is in "very close contact" with Homeland Security about patrol functions along the southern border.
Scout AI is building systems where "agents replace all of the kill chain" and are "way better and faster and smarter" than humans.
Australia's military is already using Anduril's Ghost Shark autonomous submarine. The vice chief of the Australian Defence Force told TIME the country will "continue to invest in and adopt autonomous and uncrewed systems."
China and Russia are developing their own humanoid military robots. The arms race is global.
By the time regulations catch up, tens of thousands of humanoid soldiers could be operational. The question isn't whether this technology exists; it does. The question is whether democracies will decide collectively how it gets used, or whether companies and militaries will decide unilaterally and present it as a fait accompli.
Right now, it's the latter. Two robots went to Ukraine in February. Nobody asked voters. Nobody signed a treaty. The UN's 2026 deadline is being ignored.
The shift from human soldiers to humanoid ones isn't coming. It's here. And it's moving faster than the conversation about whether it should be allowed to happen at all.