AI Deepfakes Are Running Both Sides of the Iran War
Confirmed AI deepfakes from the Iran war passed 110 in two weeks. Both Tehran-linked and Israeli-backed networks are running operations. Here's how the mechanism works.

The Iran war has produced more than 110 confirmed AI-generated deepfakes in two weeks, according to a New York Times analysis. Fake videos of downed American jets, fake missile strikes, and fake carrier attacks have accumulated tens of millions of views across X, TikTok, Facebook, and Telegram. Disinformation researchers are calling this the first conflict where AI-generated content outpaces traditionally manipulated material.
And both sides are doing it.
The Scale
One X post claimed Iranian ballistic missiles had sunk the USS Abraham Lincoln. It got 8 million views before fact-checkers reached it. US Central Command responded directly: "Iran's IRGC claims to have struck USS Abraham Lincoln with ballistic missiles. LIE. The Lincoln was not hit." The denial didn't travel as far or as fast.
CNN reported that fake videos of attacks and troop movements "racked up tens of millions of views" in the first two weeks alone. Several videos circulating in 2026 were first posted during the June 2025 conflict phase. Some dated to 2021. The same footage, new captions, new audience.
But the disinformation isn't one-directional. Researchers at Clemson University exposed Iranian-linked accounts pushing anti-Israel and anti-American content. Separately, researchers at the University of Toronto's Citizen Lab documented a network codenamed PRISONBREAK — linked to Israeli-backed operators — that used AI-generated imagery, mimicked real news outlets, and deployed deepfakes timed to coincide with actual military strikes inside Iran.
Both sides are running operations. That's the picture.
How Platforms Are Failing
X is a specific problem. When disinformation researcher Tal Hagin asked Grok — X's AI chatbot — to verify a video of supposed Iranian missiles striking Tel Aviv, Grok misidentified both the location and the date. When challenged, Grok generated a new AI image of destruction to support its claim.
It verified a deepfake with a deepfake.
X's Community Notes system relies on crowd-sourced corrections that can take hours to attach to viral posts. By then, millions have already seen the false version. The correction rarely reaches the same audience that saw the original.
The pattern holds across platforms. Engagement algorithms reward emotional novelty. A carrier sinking is more compelling than a CENTCOM denial. The false version spreads faster — by design.
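To make that concrete, here's a minimal sketch of an engagement-weighted feed score. The weights, field names, and numbers below are illustrative assumptions, not any platform's actual ranking formula; the point is only that interaction volume dominates the score, so a sensational fake structurally outranks an official denial.

```python
from dataclasses import dataclass

@dataclass
class Post:
    views: int        # impressions so far
    shares: int       # reposts, each exposing a fresh audience
    replies: int      # comments
    age_hours: float  # time since posting

def feed_score(post: Post) -> float:
    """Toy engagement-weighted ranking score (assumed weights).

    Real platform rankers are far more complex, but most reward
    interaction volume and recency in some form. Shares weigh
    heaviest because each one reaches a new audience."""
    engagement = post.shares * 8 + post.replies * 4 + post.views * 0.001
    recency = 1.0 / (1.0 + post.age_hours)  # newer posts rank higher
    return engagement * recency

# A fabricated carrier-sinking video vs. an official text denial,
# using view counts on the scale the article reports.
fake_video = Post(views=8_000_000, shares=120_000, replies=40_000, age_hours=6)
denial = Post(views=300_000, shares=4_000, replies=2_000, age_hours=3)

print(f"fake video: {feed_score(fake_video):,.0f}")
print(f"denial:     {feed_score(denial):,.0f}")
```

Swap in any plausible weights and the ordering rarely changes: the denial would need engagement it structurally never attracts.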
This is why the Albis Perception Gap Index shows diverging public beliefs about the same conflict: the gap isn't about access to information. It's about which version platforms served first.
YouTube's Response
On March 10, YouTube expanded its likeness detection technology to a pilot group of government officials, political candidates, and journalists. The tool identifies deepfakes by detecting AI-simulated faces, and members of the pilot group can request removal of unauthorized AI-generated content.
"This expansion is really about the integrity of the public conversation," said Leslie Miller, YouTube's vice president of Government Affairs and Public Policy. "We know that the risks of AI impersonation are particularly high for those in the civic space."
The tool doesn't automatically remove flagged content. Each request is evaluated under existing privacy guidelines. A deepfake of a politician saying something false can stay up while the review process runs. It's a real step. The scale of the problem is larger.
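A rough sketch of that flow, with state names and steps that are assumptions for illustration (YouTube hasn't published the review logic in this detail). The key property is that detection and removal are decoupled:

```python
from enum import Enum, auto

class Status(Enum):
    LIVE = auto()          # visible to viewers
    UNDER_REVIEW = auto()  # removal requested, but still visible
    REMOVED = auto()

def request_timeline(review_upholds: bool) -> list[tuple[str, Status]]:
    """Toy timeline of one removal request under the pilot, as reported:
    detection alone removes nothing, and the video stays up while the
    request is evaluated against existing privacy guidelines."""
    return [
        ("deepfake detected", Status.LIVE),
        ("removal requested by enrolled person", Status.UNDER_REVIEW),
        ("review concluded",
         Status.REMOVED if review_upholds else Status.LIVE),
    ]

for step, status in request_timeline(review_upholds=True):
    print(f"{step}: {status.name}")
```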
The Infrastructure Gap
While platforms build detection tools, the infrastructure for resisting censorship is being cut elsewhere.
For nearly two decades, the US quietly funded a global program, broadly called Internet Freedom, that helped activists, journalists, and ordinary users in Iran, China, and Myanmar evade government internet controls. The program dispensed over $500 million in the past decade. In 2025, DOGE-driven cuts eliminated most of it; the main granting office issued no money that year.
The Guardian reported the program was "effectively gutted." The Open Technology Fund won a lawsuit to restore some funding in December 2025. The Trump administration is now appealing.
The practical consequence: tools that helped Iranians coordinate during anti-government protests, and that let footage of military actions reach the outside world, are now under-resourced. Meanwhile, state actors on multiple sides are scaling AI-generated content.
The asymmetry is clear. Offensive information operations are cheap. Countering them requires sustained institutional commitment. That commitment is eroding in the West at the same moment the operations are expanding.
The Mechanism
Here's what an AI disinformation operation looks like during a live conflict.
A piece of AI-generated content is created — a video, an image, an audio clip. It's built to be emotionally legible: a warship burning, a city in rubble, a general giving a speech he never gave. It gets posted to one platform with a plausible caption. Coordinated accounts amplify it in the first hour before platforms can respond. Engagement algorithms treat the interaction as a relevance signal and push it further. By the time a fact-checker flags it, millions have already seen it.
The correction is a text post. The original was a video. The video wins.
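A toy simulation of that timeline makes the asymmetry arithmetic. Every number below is a made-up assumption (seed volume, hourly growth, when the flag lands, how hard it damps spread); only the shape matters. The fake compounds for hours before the correction even exists, and the correction never rides the same algorithmic boost.

```python
def cumulative_reach(hours: int, seed: float, growth: float,
                     flagged_at: int, damping: float) -> list[int]:
    """Toy hourly reach model: exponential amplification until a
    fact-check flag lands, damped growth afterward. All rates are
    illustrative assumptions, not measured values."""
    reach, total, rate = [], 0, seed
    for h in range(hours):
        total += int(rate)
        reach.append(total)
        rate *= growth if h < flagged_at else damping
    return reach

# Fake video: coordinated accounts seed ~50k impressions in hour one,
# the algorithm compounds it, and a fact-check flag lands at hour 6.
fake = cumulative_reach(hours=12, seed=50_000, growth=2.0,
                        flagged_at=6, damping=0.5)

# Text correction: posted at hour 6 with a smaller seed and slower
# growth, since corrections rarely get the same algorithmic push.
correction = [0] * 6 + cumulative_reach(hours=6, seed=20_000,
                                        growth=1.3, flagged_at=6,
                                        damping=1.0)

for h in range(12):
    print(f"hour {h:2d}   fake: {fake[h]:>10,}   correction: {correction[h]:>9,}")
```

By hour twelve the fake's cumulative reach is in the millions while the correction's is in the hundreds of thousands, roughly the gap between the carrier video and the CENTCOM denial.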
This is why the AI and information warfare landscape looks different depending on where you're consuming it. Iranian state media, Western outlets, Gulf broadcasters, and social feeds all serve different versions of the same events — some true, some fabricated, most somewhere in the blur between.
The people watching aren't irrational. They're making sense of what they're shown. The problem is that what they're shown is increasingly constructed.
What to Watch
Detection technology is improving. YouTube's pilot is real. Tools from companies like CloudSEK can now identify AI artifacts in video at scale. But detection requires reach — the tool has to find the viewer before the false belief sets in.
The deeper question is structural. Who funds the verification infrastructure? Who maintains the censorship-resistance tools for people living under governments that use information suppression as policy? Who holds platforms accountable when their AI chatbots generate disinformation to defend their own errors?
These questions don't resolve easily. But they're the ones that determine whether the information environment improves. The Iran war has made the stakes visible in real time.
For how different regions are processing this conflict, see the Iran perspectives page.
Sources & Verification
Based on 5 sources from 2 regions
- WIRED (North America)
- TechCrunch (North America)
- The Guardian (International)
- Erkan's Field Diary / Citizen Lab (International)
- Foundation for Defense of Democracies (North America)