Fake Iran War Photos Hit 100M Views, Both Sides 2026
Both sides of the Iran war flood social media with AI deepfakes, recycled footage, and fabricated victories. 100 million people saw a fake sunken warship. Here's how the machine works.

A single AI-generated image of a sunken US aircraft carrier has been viewed more than 100 million times since the Iran war began on February 28. The carrier — the USS Abraham Lincoln — is still sailing. CENTCOM confirmed the missiles "didn't even come close." But 100 million people saw the wreckage before anyone checked.
This is the first major war where both sides produce industrial-scale fabricated imagery. The volume has overwhelmed every fact-checking system that exists. NewsGuard tracked 50 false claims in 25 days — two per day, accelerating. The NYT identified over 110 AI-generated images and videos in two weeks. Cyabra documented a single pro-Iran campaign using tens of thousands of fake accounts to spread AI-generated war footage across major platforms — 145 million views in under 14 days.
Both sides are doing it. The part nobody wants to talk about: the people catching the lies can't agree on which lies matter.
How the Machine Works
The production pipeline has three tiers, each harder to catch than the last.
Tier one: recycled footage. Old clips relabelled as current combat. AFP caught images of burning vehicles in Tel Aviv that actually showed January 2026 protests in Tehran. Snopes debunked a "new" Iranian strike video as June 2025 footage. Arabic Reuters fact-checked a 1967 Vietnam War photograph shared as a Houthi naval strike. A fabricated Burj Khalifa fire video — rated 99.9% AI-generated by Hive — circulated widely before anyone flagged it.
Tier two: video game clips. BBC Verify traced the war's most viral fake — the 70-million-view "Iranian missile destroys US fighter jet" — to a military flight simulator. Right camera angle, right smoke trail, right explosion. It looked real because it was designed to look real, just not for this.
Tier three: purpose-built AI deepfakes. IRGC-linked accounts flood X, Instagram, and Bluesky with AI-generated content, including deepfakes mocking Trump styled after Lego movies. A Clemson University study found these campaigns reaching millions. Iranian state broadcaster IRIB TV1 has aired fabricated footage — in one case, showing muted video of an Israeli attack on Iran while narrating it as Iran striking Israel.
The White House isn't clean either. It's posted roughly a dozen "hype videos" to X and TikTok, weaving Call of Duty footage and Hollywood clips into real military operations. The line between propaganda and entertainment has dissolved on every side.
The Fact-Checkers Are Fact-Checking in Opposite Directions
This is where the story bends into something new.
Cross-regional coverage of the war scores a PGI of 6.33, with US and Middle Eastern outlets diverging at 7.5. The reason: coverage of disinformation is itself divergent. Each region admits deepfakes exist, then focuses almost exclusively on the other side's fabrications.
Western media leads with the fake sunken warship. It's Exhibit A in every English-language analysis of information warfare. Arabic media leads with fabricated atrocity images and doctored Israeli military claims. Iranian media — in a rare moment of self-examination — published a piece on Melliun.org titled "The Hidden War on Networks: How Iran Targets American Public Opinion." Tehran's own press analysing its own propaganda operation. Visible only in Farsi.
Each region uses the deepfake story to bolster its own credibility while undermining the other's. Trust collapse gets framed as their problem, not ours.
Read Western coverage and you'll walk away thinking Iran's the primary fabricator. Read Arabic coverage and you'll blame the US and Israel. Both are partly right. Neither sees the full picture.
When AI Breaks Its Own Verification
Then there's the Grok problem.
When rumours of Netanyahu's death circulated in mid-March, the PM posted a coffee shop video to prove he was alive. Users asked X's AI chatbot Grok to verify it. Grok declared it "100% sure — it's an advanced AI deepfake." It wasn't. Netanyahu was alive. The video was real.
AI generated the fakes. Then AI "verified" the fakes. The loop closed. Carnegie's Steven Feldstein told Deadline: "The advent of gen AI propaganda and the further erosion of trust in gatekeeping institutions make it even more difficult to combat the spread of industrial-level fabricated information."
The next evolution is what Feldstein calls the "shallow fake" — not fabricating content outright but tweaking real content just enough to change its meaning. Muted real footage with false narration. Real photos with wrong captions. Genuine quotes spliced into fabricated contexts. Harder to flag because they contain real elements.
The Same Crisis Hit US Elections
The war deepfake pipeline and the election deepfake pipeline are converging.
In March, the NRSC released a deepfake attack ad against Texas Senate candidate James Talarico — an AI-generated version appearing to make statements he never made. The ad included an "AI GENERATED" label, but CNN reported it was "small, faint, and confined to a bottom corner." Reuters found Republicans are using the tech "more frequently than Democrats this cycle," with at least three deepfake ads identified.
Only 22 states have deepfake election laws. Federal legislation's stalled. A new pro-AI PAC backed by Trump allies plans to spend $100 million on the 2026 midterms, with AI-generated content central to its strategy.
The tech fabricating war casualties is the same tech fabricating political candidates. Different targets. Same pipeline.
What 100 Million Views Actually Means
One hundred million views on a single fake image isn't a statistic. It's a structural failure.
Verification systems built for a pre-AI internet — fact-checkers, content moderation, media literacy campaigns — work on a timescale of hours or days. The fabrication pipeline works in minutes. By the time AFP debunks a fake, it's already reshaped perception for tens of millions. The correction never travels as far as the lie.
DigiCert tracked nearly 5,800 cyberattacks from about 50 Iran-linked groups since the war began, many combining hacked content with fabricated visuals. One operation timed fake bomb shelter app downloads to coincide with actual missile strikes — Israelis fleeing to shelters got texts that installed spyware instead of safety information.
IRGC spokesman Ali Mohammad Naini claimed 650 American troops were killed or wounded in the first two days. CENTCOM confirmed six. The claim circulated anyway.
Where This Goes
Nobody's solved this. Detection tools can't keep pace. Platform policies aren't enforced at speed. Media literacy helps individuals but doesn't touch industrial-scale production.
In 2026, seeing isn't believing — and the tools meant to restore trust might be making it worse. The war for what's real isn't a sideshow to the Iran conflict. It is the conflict, fought on a second front where casualties are measured in the slow death of shared reality.
The fake carrier sank. The real one sails on. A hundred million people saw the wrong version. Most still don't know.
Sources & Verification
Based on 5 sources from 3 regions
- Deadline (North America)
- US News (AP) (North America)
- Cyabra (International)
- Euronews (Europe)
- CNN (North America)

