Iran War AI Coverage: Which Deepfakes Get Reported?
English-language media covered 110+ Iranian deepfakes while barely mentioning Israel's PRISONBREAK AI campaign. Arabic outlets reported both. The coverage of disinformation has become its own form of disinformation.

The New York Times found 110 Iranian deepfakes in two weeks. The Foundation for Defense of Democracies called it "Iran's AI disinformation campaign." Trump accused Tehran of using AI as a "disinformation weapon." And every word of that is true.
Here's what's also true, but harder to find in English: Citizen Lab at the University of Toronto documented a coordinated Israeli-backed AI influence operation called PRISONBREAK that used 50+ fake accounts, deployed AI-generated deepfakes of the Evin Prison bombing, and timed its content to coincide with actual military strikes. The Pentagon confirmed that the US military used Palantir's Maven AI with Anthropic's Claude to generate targets. Iran struck AWS data centres in the UAE partly because they housed the cloud infrastructure powering these systems.
Both sides are running AI propaganda. But only one side's propaganda gets covered as propaganda.
Two operations, two levels of scrutiny
The IRGC's deepfake operation is genuinely sloppy. Sixty-two fake accounts posed as Scottish independence supporters, Irish nationalists, and Latina women from Texas and California. One day they'd discuss UK politics. The next, they were calling Khamenei a martyr. As Darren Linvill, a researcher at Clemson University's Media Forensics Hub, put it to the Guardian: "To use those same assets to suddenly talk about how the supreme leader is a martyr seems a little inauthentic from a voice that's supposedly a 20-year-old girl in county Cork."
The accounts got suspended. The story got covered. Extensively.
PRISONBREAK is a different animal. Citizen Lab's investigation found a network of more than 50 inauthentic X profiles, created in 2023, that went active in January 2025 — and then synchronised their output with Israel's June 2025 military operations against Iran. The network used AI-generated imagery, mimicked legitimate news outlets, and deployed deepfake video of the Evin Prison bombing to incite Iranian audiences to revolt.
Citizen Lab's assessment: "The hypothesis most consistent with the available evidence is that an unidentified agency of the Israeli government, or a sub-contractor working under its close supervision, is directly conducting the operation."
Arabic-language media covered both operations with roughly equal weight. English-language media didn't.
The numbers tell the story
Count the headlines. The FDD analysis — "Deepfakes on the Front Lines" — mentions Iran 47 times in its title and body. Israel appears only as a victim. NewsGuard's tracking focused on "Iranian state media." NPR, BBC, and Reuters all ran detailed breakdowns of Iranian-origin fakes.
PRISONBREAK got one Citizen Lab report, one CyberScoop article, one NPR mention buried in a broader piece. No dedicated FDD analysis. No NewsGuard tracking centre.
The Albis Perception Gap Index scored this story 8 out of 10, with the sharpest divergence between US and Middle Eastern outlets. US media frames the issue as "Iranian/IRGC deepfake operations threatening information integrity." Arabic media frames it as symmetrical — both sides deploying AI propaganda, neither side clean.
The gap isn't about facts. Both frames contain true information. The gap is about selection — which facts get headlines and which get footnotes.
The military AI nobody calls disinformation
There's a third layer that English-language coverage largely treats as a separate story, if it covers it at all.
Admiral Brad Cooper, CENTCOM chief, posted a video in early March boasting that the US military is "leveraging a variety of advanced AI tools" to "sift through vast amounts of data in seconds" during Operation Epic Fury. The system is Palantir's Maven Smart System, running Anthropic's Claude AI model. It generates and prioritises targets.
The Pentagon confirmed this. DefenseScoop, Moneycontrol, Democracy Now, and The Week all reported it. Democracy Now added a detail most others omitted: the Pentagon is investigating whether the AI system played a role in the US strike on an Iranian girls' school that killed up to 175 people.
Arabic-language outlets connected these dots. If the US uses AI to pick targets — and some of those targets turn out to be schools — is the AI system itself a form of information distortion? It doesn't generate fake images. It generates fake confidence in which buildings contain military targets.
English-language media covered Maven as a tech-and-defence story. Arabic media covered it as a war-crimes accountability story. Same AI, same war, different frame.
When Iran struck the data centres
On March 3, IRGC drones hit two AWS data centres in the UAE, taking down capacity in the me-central-1 and me-south-1 cloud regions. Banking, payments, and consumer services across the Gulf went offline. The IRGC explicitly cited the data centres' role in supporting US military and intelligence networks.
English coverage: "Iranian attacks on Amazon data centres signal a new kind of war" (Fortune). The frame — Iran as aggressor, cloud infrastructure as innocent civilian target.
Arabic coverage: AWS hosts the cloud infrastructure that runs the AI targeting system that hit Iranian cities. The frame — data centres as military assets, not civilian infrastructure.
Both frames contain truth. Neither contains the whole truth. And the frame you saw first probably shaped which truth feels obvious.
Why the selective coverage matters
Researchers at Erkan's Field Diary called this the first "AI-native" conflict information environment. They're right, but not just because both sides use AI to generate propaganda. It's AI-native because the coverage of the propaganda is itself shaped by the same algorithmic and editorial selection pressures that shape everything else.
Here's the mechanism. A US-based think tank publishes a report on Iranian deepfakes. English-language outlets cover it. The report is factually accurate. The coverage is factually accurate. And the net effect is a perception that disinformation is something Iran does to the information environment, not something that emerges from every actor in it.
Meanwhile, Citizen Lab publishes an equally rigorous report on an Israeli-backed AI operation. It gets a fraction of the coverage. Not because editors are conspiring, but because the PRISONBREAK story doesn't fit the dominant narrative frame. Iranian deepfakes confirm what English-speaking audiences already believe: authoritarian states weaponise AI. Israeli influence operations complicate that frame. Complicated frames get less coverage.
The result: people who read only English know that Iran runs deepfakes. They don't know, with the same specificity and confidence, that Israel runs AI influence campaigns timed to kinetic strikes, or that the US military's own AI targeting system is under investigation for its role in civilian deaths.
What you can actually do
First, track the sourcing. When you see a story about war disinformation, check whether the analysis covers all parties or only one. The Erkan's Field Diary report applies its five-category typology — direct fabrication, strategic omission, narrative inflation, coordinated inauthentic behaviour, and meta-disinformation — to Iran, Israel, and the US equally. Most English-language analyses don't. A rough way to run this first check yourself is sketched after the third point below.
Second, notice the framing. "Iran's AI disinformation campaign" and "AI disinformation in the Iran war" sound similar. They aren't. The first names an aggressor. The second names an environment. The word order does work your conscious mind doesn't register.
Third, watch for meta-disinformation — the practice of labelling accurate reporting as "fake news" to discredit it. Trump accused Iran of using AI as a "disinformation weapon" while his own military was publicly boasting about using AI to select bombing targets. Both statements are verifiable. Together, they reveal a perception gap wider than any single deepfake.
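For readers who want to make the first check concrete, here is a minimal sketch in Python. It counts whole-word mentions of each party in an analysis's text; a report where one belligerent's count dwarfs the others isn't proof of bias, but it tells you whose operations to go verify elsewhere. The party list and the sample passage are illustrative assumptions, not data drawn from any report cited in this article.

```python
# A minimal sketch of the "track the sourcing" check: count whole-word
# mentions of each party in a disinformation analysis. PARTIES and
# SAMPLE are illustrative assumptions, not taken from any cited report.
import re
from collections import Counter

PARTIES = ["iran", "irgc", "tehran", "israel", "idf", "pentagon", "centcom"]

def mention_counts(text: str) -> Counter:
    """Count case-insensitive whole-word mentions of each party."""
    lowered = text.lower()
    return Counter(
        {p: len(re.findall(rf"\b{re.escape(p)}\b", lowered)) for p in PARTIES}
    )

SAMPLE = (
    "Iran's IRGC ran a sprawling deepfake network, and Iran's state media "
    "amplified it. Tehran's fakes were suspended this week."
)

for party, n in mention_counts(SAMPLE).most_common():
    print(f"{party}: {n}")
```

On this sample the count is entirely one-sided, which is exactly the signal the first check is looking for: an analysis that names only one belligerent's operations.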
This isn't a war where one side tells the truth and the other lies. It's a war where both sides use AI to manufacture reality — and the media covering the manufacture has picked a side in deciding which fakes deserve scrutiny. The deepfake you don't hear about shapes your perception just as much as the one you do.
Sources & Verification
Based on 5 sources from 4 regions
- The Guardian (Europe)
- Citizen Lab (North America)
- Erkan's Field Diary (International)
- Democracy Now (North America)
- Foreign Affairs Forum (Middle East)