AI Deepfakes Are Flooding the Iran War — and Nobody Can Tell What's Real
Over 110 AI deepfakes with pro-Iran messaging identified in two weeks. How artificial intelligence is weaponising information in the 2026 Iran conflict.

AI-generated deepfakes are flooding social media with fabricated footage of the Iran war at a scale never seen in any previous conflict. The New York Times has identified more than 110 unique deepfakes carrying pro-Iran messaging in just two weeks. Researchers say platforms can't keep up.
The fakes aren't subtle. AI-generated videos circulating on X show tearful American soldiers inside bombed-out embassies. Captured US troops kneeling beside Iranian flags. A destroyed US Navy fleet. Tel Aviv in ruins. Netanyahu dead. None of it happened.
The Scale Is Unprecedented
Every modern conflict has produced disinformation. But what's happening with the Iran war is different in kind, not just degree.
AFP Fact Check reports that the Middle East war "has unleashed an avalanche of AI-generated visuals, dwarfing anything seen in previous conflicts." The imagery is lifelike enough that ordinary social media users often can't tell what's real and what isn't.
Pro-Iran accounts push fabricated battlefield imagery, fake missile-strike footage, and AI-generated Lego-style propaganda videos. Iranian state media has reportedly aired AI-generated content alongside inflated casualty figures. But the flood isn't one-sided: every party to the conflict is operating in an information environment where truth has become genuinely difficult to verify.
Independent verification is nearly impossible in many cases. Tehran imposed communications blackouts early in the conflict, and the Pentagon's announcements about destroyed Iranian assets are amplified widely but rarely confirmed by outside observers.
X's Policy Shift — and Its Limits
Elon Musk's X announced last week that creators will be suspended from its revenue-sharing programme for 90 days if they post AI-generated war videos without disclosing that they're artificial. Repeat offenders face permanent bans.
It's a notable pivot for a platform heavily criticised for becoming a hub of disinformation since Musk's acquisition in 2022. The State Department called it a "great complement" to X's Community Notes system.
But disinformation researchers aren't convinced it's working.
"The feeds I monitor are still flooded with AI-generated content about the war," said Joe Bodnar of the Institute for Strategic Dialogue. "It doesn't seem like creators have been dissuaded from pushing misleading AI-generated images and videos about the conflict."
He pointed to monetised "blue check" accounts still sharing unlabelled AI clips. The financial incentive to go viral with dramatic war footage apparently outweighs the risk of a 90-day revenue suspension.
Deepfakes Beyond the Battlefield
The weaponisation of AI-generated content isn't limited to the Iran conflict.
In Germany, a legal gap has emerged around pornographic deepfakes. German broadcaster SWR reports there's currently no specific criminal statute covering AI-generated intimate imagery — a gap that victims of digital violence are pressing legislators to close.
Meanwhile, deepfake technology is quietly reshaping commercial life. Influencers now "appear" in advertisements without ever stepping onto a set. Celebrities "endorse" products remotely through digital likenesses. The same tools that create wartime propaganda power a growing industry of synthetic media in entertainment and marketing.
The Athens Times observes that "constant exposure to manipulated imagery may erode confidence in media, making shared agreement on even basic facts increasingly difficult." That erosion affects everything — not just war coverage, but politics, commerce, and personal trust.
Information Control Takes Many Forms
While AI deepfakes dominate headlines, governments are also weaponising information through more traditional means: censorship.
China's Cyberspace Administration (CAC) mandated in February 2026 that social media platforms censor content deemed to spread "fear of marriage" or "anxiety about childbirth." The directive targets content that might discourage young Chinese people from starting families amid continued population decline.
The policy's first visible enforcement came when authorities banned a Uyghur comedian's social media account. Her offence: joking about married life. She'd posted on Weibo that if she had a husband and kids while sick with a fever, she'd "have to lean against the wall to get up and cook for them."
It's a stark example of how information control operates at every scale — from fabricated war footage seen by millions to a single comedian's joke about domestic life.
Russia's Propaganda as Strategic Signalling
Russia's information warfare operates on yet another level. Prominent Russian TV propagandists have escalated rhetoric in early 2026, with host Vladimir Solovyov suggesting Russia should consider "special military operations" not just in Ukraine but in Armenia and Central Asian countries.
The statements triggered formal protests from Armenia, Uzbekistan, and Kyrgyzstan. Moscow's Foreign Ministry denied the comments reflected official policy but didn't repudiate the underlying logic.
Researchers at E-International Relations describe this as "envelope testing" — using media figures to probe whether foreign governments will protest, ignore, or realign in response. The propaganda serves multiple purposes: justifying the ongoing war in Ukraine, maintaining domestic unity by framing external threats as existential, and appeasing hardline nationalist factions.
With Russian casualties from the Ukraine war estimated at up to 1.2 million, including roughly 325,000 killed, the pressure to amplify external threats through propaganda grows alongside the domestic cost of the conflict.
The Expanding Battlefield
India's Defence Minister Rajnath Singh captured the broader shift in a speech this weekend. "Nations can now be weakened not only through conventional war but also through economic, cyber, space and information warfare," he said.
He described modern national security as encompassing "economic, digital, energy and food security" — a recognition that the battlefield now extends into every screen, every feed, every algorithm.
That's the reality of information warfare in 2026. The tools are cheaper and more accessible than ever. The volume is overwhelming. The platforms are struggling. And the line between what happened and what was fabricated grows thinner every day.
Understanding these mechanisms doesn't require picking sides. It requires paying attention to how information reaches you, who created it, and what reaction it's designed to provoke.
Sources & Verification
Based on 5 sources from 5 regions
- AFP Fact Check (International)
- Foreign Policy (North America)
- Times of Israel (Middle East)
- Times of India (South Asia)
- Athens Times (Europe)