You Can't Tell What's Real Anymore. And That's the Point.
The Iran-Israel war is the first conflict where deepfakes flood feeds faster than fact-checkers can debunk them. Here's how the trust infrastructure of war reporting just broke.
Two fake videos hit TikTok's top 15 Iran videos in one week. Three others racked up 100 million views across platforms. One showed "the moment an Iranian missile struck Tel Aviv" — posted by Tehran Times as real footage. It was Google's Veo 3 AI, 8-second clips stitched together, watermark visible at the bottom.
Nobody noticed until it went viral.
The Iran-Israel war isn't just fought with missiles. It's fought with pixels. And the second front — the one flooding your feed with AI-generated footage indistinguishable from real — might matter more than the first.
This is the first major conflict where both sides, plus random actors, are weaponizing deepfakes at scale. Not deepfakes as a hypothetical future threat. They're here. They're working. And you can't tell the difference.
The Mechanics: Who's Making Them and How
Tehran Times didn't hide the watermark. They just bet nobody would look.
The video showed missiles striking Tel Aviv. Eight-second clips — the default length for Google's Veo 3 AI generator. Warped doorframes. Body parts flickering. Unnatural reactions to explosions. All the tells were there.
It still went viral.
Iranian state outlets like Fars News and Nour News published a compilation called "Tel Aviv Before and After the War with Iran." Nearly every frame was AI-generated. A digital mirage passed off as news.
But it's not just state propaganda. Creators on X, TikTok, and Instagram are generating fake combat footage because it gets views. And views mean money.
X's revenue-sharing program incentivized engagement. War content engages. AI tools make war content easy. The math wrote itself.
Microsoft tracked more than 200 disinformation incidents using AI-generated content across 2024 and 2025, more than double the count from the years before. That was before the Iran war.
Now it's everywhere.
Why Detection Fails at Scale
Here's the problem: detection works. Just not fast enough.
X's Nikita Bier commented under a viral war video with a detection tool's verdict: 99.9% likely AI-generated, with Sora 2 pinned as the source at 98.9% confidence.
The video already had millions of views.
Detection is reactive, not proactive. By the time the tools flag a fake, it's already done its job. The damage happens in the first hours — before fact-checkers arrive, before platforms act, before anyone knows to look.
AI detection tools spot the tells: warped objects, inconsistent lighting, frame rate glitches, unnatural movement. They're getting better.
But generation tools are getting better faster.
The Albis Global Awareness Index scored this story 5.45 — Selective Visibility. Only 3 out of 7 regions covered deepfake proliferation in the Iran conflict. That means 4.71 billion people (75% of the world) are seeing war footage with zero awareness that half of it might be fake.
How You Personally Can't Tell What's Real
Slow a video to 0.25x speed. Watch closely. Can you spot the warped doorframe? The shadow that doesn't match the light source? The person who doesn't blink for 12 seconds straight?
Maybe. If you're looking. If you have time. If you're skeptical.
Most people don't do any of those things. They scroll. They react. They share.
A UVU study found participants rated deepfakes as trustworthy as authentic content, and sometimes more trustworthy. The tells are there, but your brain fills in the gaps. You see what you expect to see.
Here's what you can look for:
- Eight-second clips. Veo 3's default. If a "war video" is exactly 8 seconds or a stitched series of 8-second clips, question it.
- Unnatural reactions. People flinch before explosions in real footage. AI-generated people often don't react at all — or react too perfectly.
- Warped objects. Doorframes that bend. Walls that ripple. Reflections that don't match.
- Watermarks. Check the bottom corners. Veo, Sora, and other generators often leave faint marks.
- Cross-reference. If you can't find the same event reported by independent sources or satellite imagery, pause before sharing.
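The duration check in the first bullet is easy to automate. Here's a minimal sketch, assuming you already have each clip's length in seconds (e.g. from a tool like ffprobe); the 8-second default and the half-second tolerance are heuristics drawn from this article, not a reliable detector:

```python
def is_suspicious_duration(duration_s: float, unit_s: float = 8.0, tol_s: float = 0.5) -> bool:
    """Flag clips whose length is (a multiple of) a generator's default.

    Veo 3 emits 8-second clips by default, so an exactly-8-second "war
    video", or one stitched from 8-second segments, deserves extra
    scrutiny. This is a heuristic, not proof: real footage can be
    8 seconds long too.
    """
    if duration_s <= 0:
        return False
    multiples = round(duration_s / unit_s)
    if multiples == 0:
        return False
    # How far the clip drifts from a clean multiple of the default length.
    drift = abs(duration_s - multiples * unit_s)
    return drift <= tol_s

# A 24.1s clip is three stitched 8s segments, within tolerance.
print(is_suspicious_duration(24.1))   # True
print(is_suspicious_duration(11.3))   # False
```

A flag from this check means "look closer with the other tells," nothing more, since plenty of genuine clips happen to run about eight seconds.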
But even if you do all that, you're one person. The flood is bigger than you.
The Trust Infrastructure Just Broke
Journalism relies on trust. You trust that what a reporter shows you is real. You trust that editors verified it. You trust that the outlet's reputation is on the line.
Deepfakes break that chain.
When the public becomes sensitized to the threat, they don't just distrust fakes. They distrust everything. Surveys already show declining trust in news overall — not because journalism got worse, but because people can't tell what's real anymore.
Journalists are getting more wary too. Publish fast-breaking footage from a conflict zone and you risk broadcasting a deepfake. Wait to verify and you're too slow — the story already spread without you.
Foreign Affairs warned in 2018 that deepfakes would make journalists "more wary about relying on, let alone publishing, audio or video of fast-breaking events." That prediction aged well.
The Iran conflict is proving it in real time.
What Platforms Are Trying (And Why It's Not Enough)
X just introduced a new policy: post AI-generated war content without labeling it and you lose revenue sharing for 90 days.

It's something. But it's a band-aid on a gunshot wound.
The policy only covers "armed conflict" content. It's unclear if it applies to protests, civil unrest, or historical footage. X hasn't shared how violations will be detected at scale, how fast reports get processed, or what the appeals process looks like.
India ordered platforms to take down deepfakes within 3 hours of discovery — down from a 36-hour window. Meta and TikTok rolled out AI detection and labeling tools.
The results are mixed. CNBC identified some misleading TikTok videos on Venezuela that were labeled as AI-generated. Others that appeared fabricated had no warnings at all.
Detection is improving. But generation is improving faster. And the incentive structure — views = money, war = views, AI = easy war content — hasn't changed.
The Part Nobody Wants to Say
This is the first war where the information about the war may matter more than the war itself.
If people can't tell what's real, they can't form accurate views. If they can't form accurate views, they fall back on bias, narrative, and whatever version confirms what they already believe.
That's not journalism. That's propaganda dressed up as crowdsourced truth.
The tools to create convincing fakes are free. The tools to detect them are expensive, slow, and reactive. The platforms make money either way. And the public — the 75% of the world not even aware this is happening — scrolls through it all thinking they're watching reality.
You can't fence what you can't see. You can't moderate what moves faster than you can flag it. You can't trust what you can't verify.
The trust infrastructure of war reporting just broke. And the Iran-Israel conflict is showing us what happens next.
Where This Goes
Three paths forward:
Path 1: Detection catches up. AI tools get fast enough to flag fakes before they go viral. Platforms enforce labeling. Users learn to spot the tells. Trust rebuilds slowly.

Path 2: We adapt. People assume all fast-breaking footage is fake until proven otherwise. Journalism shifts to satellite imagery, on-the-ground verification, multi-source corroboration. Slower, but trustworthy.

Path 3: Nothing changes. Fakes keep spreading. Platforms profit. Trust erodes further. People believe whatever fits their narrative. Information warfare becomes the default state.

Right now, we're drifting between paths 2 and 3. Detection isn't catching up fast enough for path 1. And nobody's investing in the infrastructure to make path 2 scalable.
The Iran war is showing us the future of conflict. Not just missiles and strikes — but a battlefield where reality itself is contested, frame by frame, view by view, until nobody knows what's real anymore.
And that's the point.
Sources for this article are being documented. Albis is building transparent source tracking for every story.