YouTube Deepfake Detection Expands as Iran Runs Fake Accounts Across Three Platforms
YouTube expands deepfake detection to politicians while Iran runs 62 fake accounts across platforms. Information warfare reached a new threshold in 2026.

YouTube's rolling out deepfake detection for politicians and journalists. Iran's Revolutionary Guard is running 62 fake accounts across X, Instagram and Bluesky. Deepfakes reached a new threshold in 2026 — they're now nearly indistinguishable from reality.
These three developments aren't separate stories. They're snapshots of information warfare as it exists right now.
Detection Tech Reaches New Territory
YouTube announced March 10 that it's expanding its likeness detection technology to government officials, political candidates, and journalists. The tech identifies AI-generated faces and lets people request removal if content violates policy.
It launched last year for 4 million creators in the YouTube Partner Program. Now it's going political.
"This expansion is really about the integrity of the public conversation," said Leslie Miller, YouTube's VP of Government Affairs. "The risks of AI impersonation are particularly high for those in the civic space."
The tool requires users to verify their identity with a selfie and a government ID. They create a profile, view matches, and request removal. Not every match gets removed; YouTube evaluates each request under its privacy policy to separate parody from genuine harm.
The company won't say which politicians are testing it. The volume of removal requests from creators has been "very small" so far. Politicians might see different numbers.
Iran's Troll Farm Pivots to War Content
Clemson University's Media Forensics Hub identified 62 fake accounts linked to Iran's Islamic Revolutionary Guard Corps. They posed as Scottish independence supporters, Irish nationalists, and Latina women from Texas and California.
The accounts built credibility posting local issues. Scottish accounts pushed independence. Irish accounts wanted reunification. Latina accounts railed against ICE and backed Venezuela's Maduro.
Then the U.S. and Israel struck Iran on February 28. Every account pivoted to war.
One fake user, "Ana Rodri" from California, posted ICE protest images in early February. After the strikes, she posted images of anti-war protests outside Trump Tower, anti-American cartoons, and footage of a downed American pilot.
"Iran redirected its resources toward propaganda around the war, trying to make the war more painful for the United States," said Darren Linvill, who co-leads the Media Forensics Hub. "Clearly with the hope of shortening it."
The posts reached tens of millions of users. Individual engagement was modest (dozens of views, few comments), but researchers believe the network extends beyond the accounts they've identified.
Most American-facing accounts on X got suspended by March 9. European accounts stayed live. Instagram and Bluesky took them down after Clemson's report.
Cognitive Manipulation at Scale
The World Economic Forum's Global Risks Report 2026 placed mis- and disinformation among the top short-term global risks. It's one of few risks severe over both two-year and ten-year horizons.
Deepfakes crossed a threshold in 2026. The telltale glitches of earlier generations have disappeared, and anyone with a smartphone can make them.
Ireland's 2025 presidential election saw a deepfake video falsely showing the winner withdrawing. Fake footage of broadcasters "confirming" the news dropped days before polling. The Netherlands saw 400 AI-generated images attacking political candidates.
"Just knowing deepfakes exist can make us doubt things we read and see — even the truth," according to research cited in the WEF report.
Micro-targeting uses self-reported online data to identify people susceptible to emotional manipulation. Messages are then selected to resonate emotionally and to be widely shared. The result amplifies polarization and extends disinformation's reach.
The Three-Layer Defense
Finland teaches grade school children to spot manipulative information. The curriculum asks six questions: Who is communicating with me? How did they find me? What do they gain? Can I verify their message? Am I at risk if I embrace it? Could I harm others by sharing it?
Deepfake detection is improving through layered methods. Inconsistencies show up at the pixel level: mismatched noise patterns, color shifts, lip-sync errors. These artifacts persist even in latest-generation fakes.
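In broad strokes, the noise-mismatch check works like this: a region synthesized or pasted into a real frame often carries different noise statistics than the surrounding camera sensor. A minimal sketch with synthetic data, not any production detector (the `region_noise_variance` helper and all numbers are invented for illustration):

```python
import numpy as np

def noise_residual(img, k=3):
    """High-pass residual: image minus a k-by-k box-blur approximation."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    blurred = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= k * k
    return img - blurred

def region_noise_variance(img, box):
    """Variance of the noise residual inside a (y0, y1, x0, x1) box."""
    y0, y1, x0, x1 = box
    return noise_residual(img)[y0:y1, x0:x1].var()

# Synthetic demo: the background carries strong sensor-like noise,
# while a pasted "face" patch is much smoother -- the kind of
# statistical mismatch that splice forensics looks for.
rng = np.random.default_rng(0)
img = 128 + rng.normal(0, 8.0, (64, 64))                # noisy background
img[16:48, 16:48] = 128 + rng.normal(0, 1.0, (32, 32))  # smooth insert

bg_var = region_noise_variance(img, (0, 16, 0, 64))
patch_var = region_noise_variance(img, (20, 44, 20, 44))
ratio = bg_var / patch_var
print(f"background noise var: {bg_var:.1f}")
print(f"patch noise var:      {patch_var:.1f}")
print(f"mismatch ratio:       {ratio:.1f}")  # large ratio -> suspicious
```

Real detectors combine many such signals with learned models, but the underlying idea, comparing local statistics that generators struggle to match, is the same.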
Distribution patterns help too. Malicious deepfakes tend to spread via bot networks, and account metadata reveals telltale creation dates and posting rhythms.
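The metadata side can be sketched just as simply. Coordinated accounts tend to post on near-identical schedules with machine-like regularity. A toy illustration with invented account data (thresholds and helper names are assumptions, not any platform's actual method):

```python
from itertools import combinations

def interval_regularity(timestamps):
    """Coefficient of variation of inter-post gaps; near 0 = machine-like."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = sum(gaps) / len(gaps)
    var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
    return (var ** 0.5) / mean

def synced_pairs(accounts, window=60):
    """Account pairs whose posts repeatedly land within `window` seconds."""
    flagged = []
    for (a, ta), (b, tb) in combinations(accounts.items(), 2):
        hits = sum(1 for x in ta if any(abs(x - y) <= window for y in tb))
        if hits >= 0.8 * min(len(ta), len(tb)):
            flagged.append((a, b))
    return flagged

# Toy feed (seconds since some epoch): two bots share a schedule,
# one organic account posts irregularly.
accounts = {
    "bot_1":   [0, 3600, 7200, 10800, 14400],
    "bot_2":   [10, 3605, 7190, 10790, 14420],
    "organic": [500, 4100, 9900, 30000, 31000],
}
print(synced_pairs(accounts))  # [('bot_1', 'bot_2')]
print(interval_regularity(accounts["bot_1"]))  # 0.0, clockwork regularity
```

Researchers like Clemson's team layer signals of this kind (creation dates, shared infrastructure, content reuse) rather than relying on any single one.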
The EU AI Act reflects this shift. Article 50 requires labeling of AI-generated content and disclosure of synthetic interactions. It's enforceable from August 2026, with fines for transparency violations of up to 3% of global annual turnover.
What's Happening Here
YouTube's expanding detection to protect public figures. Iran's running fake accounts to shape war perception. The EU's mandating AI content labels.
These aren't solutions. They're responses to a mechanism that's already operating at scale.
Information warfare doesn't require explosives or soldiers. It requires understanding which emotional triggers move which audiences, then deploying content through networks designed to look authentic.
The effectiveness isn't measured in how many people believe a single deepfake. It's measured in cumulative doubt — whether people can trust anything they see, whether verification systems can keep pace, whether the cost of determining truth becomes too high.
"The cognitive impact of disinformation will define how societies evolve in the coming years," the WEF report states. "The year 2026 will test whether institutions, societies, and platforms can adapt fast enough."
Detection tech, fake account takedowns, and AI labeling requirements are the current defense layer. The question isn't whether they work. It's whether they scale faster than the manipulation methods they're trying to contain.
Sources & Verification
Based on 4 sources from 2 regions
- TechCrunch (North America)
- MS NOW (North America)
- World Economic Forum (International)
- Clemson University Media Forensics Hub (North America)
Keep Reading
AI Wins at Spotting Fake Photos, Humans Win at Videos
A new University of Florida study reveals the detection split: AI crushes deepfake photos, but humans outperform machines at spotting fake videos.
When War Becomes Content: Information Warfare Goes Public
Governments are weaponizing information in plain sight, mixing real violence with video game footage, blocking domestic truth while projecting foreign lies, and using AI to create convincing fakes faster than detection systems can adapt.
When Your Prayer App Becomes a Weapon
A hacked prayer app sent propaganda to 5 million Iranians during airstrikes. Iran cut 90 million people offline for 280 hours. YouTube's racing to detect political deepfakes. Here's how information warfare actually works in 2026.