AI Deepfakes Are Running Both Sides of the Iran War
The Iran war has produced 110+ confirmed AI deepfakes in two weeks, with tens of millions of views. Both Tehran-linked and Israeli-backed networks are running operations. Here's how the mechanism works.

More than 110 confirmed AI-generated deepfakes in two weeks. Fake downed jets, fake missile strikes, fake carrier attacks — tens of millions of views across X, TikTok, Facebook, and Telegram. Researchers are calling this the first conflict where AI-generated content outpaces traditionally manipulated material.
Both sides are running it.
The Scale
One X post claimed Iranian missiles had sunk the USS Abraham Lincoln. Eight million views before fact-checkers reached it. US Central Command responded: "Iran's IRGC claims to have struck USS Abraham Lincoln with ballistic missiles. LIE. The Lincoln was not hit." The denial didn't travel as far or as fast.
Several videos circulating in 2026 were first posted during the June 2025 conflict phase. Some dated to 2021. Same footage, new captions, new audience.
But the disinformation isn't one-directional. Researchers at Clemson University exposed Iranian-linked accounts pushing anti-Israel and anti-American content. Separately, the University of Toronto's Citizen Lab documented a network codenamed PRISONBREAK — linked to Israeli-backed operators — that used AI-generated imagery, mimicked real news outlets, and deployed deepfakes timed to coincide with actual military strikes inside Iran.
How Platforms Are Failing
X is a specific problem. When researcher Tal Hagin asked Grok — X's AI chatbot — to verify a video of supposed Iranian missiles striking Tel Aviv, Grok misidentified both the location and the date. When challenged, Grok generated a new AI image of destruction to support its original claim.
It verified a deepfake with a deepfake.
X's Community Notes relies on crowd-sourced corrections that take hours to attach to viral posts. By then, millions have already seen the false version. The correction rarely reaches the same audience.
The pattern holds across platforms. Engagement algorithms reward emotional novelty. A carrier sinking is more compelling than a CENTCOM denial. The false version spreads faster — by design.
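To see the incentive in code: below is a minimal sketch of the kind of engagement-weighted scoring a feed-ranking system applies. The weights, the time decay, and the field names are illustrative assumptions, not any platform's actual formula.

```python
from dataclasses import dataclass

@dataclass
class Post:
    shares: int
    comments: int
    likes: int
    age_hours: float

def feed_score(post: Post) -> float:
    """Toy engagement score: high-arousal interactions (shares, comments)
    outweigh passive ones (likes), and recency is rewarded with time decay.
    All weights are illustrative assumptions."""
    engagement = 4.0 * post.shares + 2.0 * post.comments + 1.0 * post.likes
    decay = 0.5 ** (post.age_hours / 6.0)  # halve relevance every 6 hours
    return engagement * decay

# A dramatic fake that drives shares outranks a sober denial that
# mostly draws likes, even when both go up at the same moment.
fake = Post(shares=9_000, comments=4_000, likes=20_000, age_hours=1.0)
denial = Post(shares=400, comments=300, likes=5_000, age_hours=1.0)
print(feed_score(fake) > feed_score(denial))  # True
```

Under any scoring of this shape, a carrier sinking beats a denial. The ranking isn't broken; it's doing exactly what it was built to do.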
The Albis Perception Gap Index shows diverging public beliefs about the same conflict. The gap isn't about access to information. It's about which version platforms served first.
YouTube's Response
On March 10, YouTube expanded its likeness detection to a pilot group of officials, political candidates, and journalists. The tool identifies AI-generated deepfakes by detecting simulated faces. Pilot group members can request removal of unauthorised AI-generated content.
"This expansion is really about the integrity of the public conversation," said Leslie Miller, YouTube's VP of Government Affairs. "The risks of AI impersonation are particularly high for those in the civic space."
The tool doesn't automatically remove content. Each request goes through existing privacy guidelines — a deepfake of a politician can stay up while the review runs. It's a real step. The problem is larger.
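Likeness detection of this kind generally works by comparing face embeddings extracted from uploads against reference embeddings enrolled by protected individuals. Here is a conceptual sketch of that flow, assuming a cosine-similarity match and an illustrative threshold. This is not YouTube's implementation; the names are hypothetical, and the face-recognition model that would produce the embeddings is omitted.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def likeness_flags(upload_faces: list[np.ndarray],
                   enrolled: dict[str, np.ndarray],
                   threshold: float = 0.85) -> list[str]:
    """Compare each face embedding found in an upload against the
    reference embeddings of enrolled pilot-group members. A match does
    not remove the video: it only surfaces a flag, and any removal
    request still goes through the review process described above.
    The 0.85 threshold is an illustrative assumption."""
    flagged = []
    for face in upload_faces:
        for person, reference in enrolled.items():
            if cosine_similarity(face, reference) >= threshold:
                flagged.append(person)
    return flagged
```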
The Infrastructure Gap
While platforms build detection tools, the infrastructure for resisting censorship is being cut.
For nearly two decades, the US quietly funded a global program that helped activists, journalists, and ordinary users in Iran, China, and Myanmar evade government internet controls. Over $500 million in the past decade. DOGE-driven cuts eliminated most of it in 2025. The main granting office issued no money that year.
The Open Technology Fund won a lawsuit to restore some funding in December 2025. The Trump administration is appealing.
The practical consequence: the tools that helped Iranians coordinate during anti-government protests and get footage of military actions to the outside world are now under-resourced. State actors on multiple sides are scaling AI content production.
Offensive information operations are cheap. Countering them requires sustained institutional commitment. That commitment is eroding in the West at the same moment the operations are expanding.
The Mechanism
Here's what a live-conflict AI disinformation operation looks like.
Content is created — video, image, audio clip. It's built to be emotionally legible: a warship burning, a city in rubble, a general giving a speech he never gave. It goes up on one platform with a plausible caption. Coordinated accounts amplify it in the first hour before platforms respond. Engagement algorithms treat the interaction as a relevance signal and push it further. By the time a fact-checker flags it, millions have already seen it.
The correction is a text post. The original was a video. The video wins.
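A toy model makes the asymmetry concrete. Assume the fake compounds from minute one while the correction starts hours later with weaker amplification; every number below is an assumption chosen only to show the shape of the race.

```python
def reach(seed: float, hourly_growth: float, hours_live: float,
          ceiling: float = 50_000_000) -> int:
    """Toy compounding-reach model: each hour of amplification
    multiplies the audience by a constant factor, capped at a
    saturation ceiling. Every parameter is an assumption."""
    if hours_live <= 0:
        return 0
    return int(min(seed * hourly_growth ** hours_live, ceiling))

FACT_CHECK_LAG = 6.0  # assumed hours before the correction lands

for t in (1, 3, 6, 12, 24):
    fake = reach(seed=5_000, hourly_growth=2.0, hours_live=t)
    fix = reach(seed=5_000, hourly_growth=1.4, hours_live=t - FACT_CHECK_LAG)
    print(f"hour {t:>2}: fake={fake:>12,}  correction={fix:>10,}")
```

Run it and the correction never closes the gap: with a six-hour lag and weaker amplification, it finishes the day more than twenty times behind. That gap is the whole point of the operation.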
Iranian state media, Western outlets, Gulf broadcasters, and social feeds all serve different versions of the same events — some true, some fabricated, most in the blur between. The people watching aren't irrational. They're making sense of what they're shown. The problem is what they're shown is increasingly constructed.
What to Watch
Detection technology is improving. YouTube's pilot is real. Tools from companies like CloudSEK can identify AI artifacts in video at scale. But detection requires reach — the tool has to find the viewer before the false belief sets in.
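Screening video at scale typically means sampling frames and scoring each with a classifier trained on known AI outputs. A minimal sketch of that loop follows; the score_frame classifier is a stand-in assumption, not CloudSEK's model or any specific product.

```python
from typing import Callable, Sequence

Frame = bytes  # stand-in for a decoded video frame

def looks_ai_generated(frames: Sequence[Frame],
                       score_frame: Callable[[Frame], float],
                       sample_every: int = 30,
                       threshold: float = 0.7) -> bool:
    """Sample roughly one frame per second of 30 fps video, score each
    with a classifier that returns P(AI-generated), and flag the clip
    if the mean score crosses the threshold. Sampling, rather than
    scoring every frame, is what keeps this cheap enough to run at
    platform scale. Both parameters are illustrative."""
    if not frames:
        return False
    sampled = frames[::sample_every]
    scores = [score_frame(frame) for frame in sampled]
    return sum(scores) / len(scores) >= threshold
```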
The structural questions are harder. Who funds the verification infrastructure? Who maintains censorship-resistance tools for people living under governments that use information suppression as policy? Who holds platforms accountable when their AI chatbots generate disinformation to defend their own errors?
The Iran war has made the stakes visible in real time. These questions will still be open in November.
For how different regions are processing this conflict, see the Iran perspectives page.
Sources & Verification
Based on 5 sources from 2 regions
- WIRED · North America
- TechCrunch · North America
- The Guardian · International
- Erkan's Field Diary / Citizen Lab · International
- Foundation for Defense of Democracies · North America