The Media Literacy Vaccine That Backfires
UNESCO research shows deepfake exposure increases gullibility, not skepticism. Every "spot the fake" quiz might be making the problem worse.
Exposure to deepfakes doesn't make you better at spotting them. It makes you worse.
That's the conclusion from UNESCO research published this week. The findings gut the assumption behind every "spot the deepfake" quiz and media literacy campaign: that seeing fakes builds immunity.
It doesn't. It builds vulnerability.
The 0.1% Problem
iProov tested 2,000 people in the US and UK, showing them a mix of real and fake images and videos. Participants were primed: they knew to look for deepfakes.
Only 0.1% got them all right.
That's not a typo. Out of 2,000 people actively trying to detect fakes, two people succeeded.
The rest? Wrong more often than random guessing.
Familiarity Breeds Belief
The mechanism is called the illusory truth effect. It's simple: your brain interprets familiarity as truth.
See something once? Skeptical.
See it three times? Believable.
See it ten times? Probably true.
A study across eight countries confirmed it. Repeated exposure to deepfakes increases the likelihood you'll believe their claims — regardless of accuracy.
UNESCO's Dr Nadia Naffi puts it plainly: "We're approaching a threshold of synthetic reality, beyond which humans can no longer distinguish real from fake without digital technology."
The Education Paradox
Media literacy programs assume exposure is training. Show people deepfakes, teach them the tells, build their defenses.
But research shows the opposite. Deepfake exposure can manipulate beliefs, enhance misinformation credibility, and induce false memories.
Social media makes it worse. Platforms amplify the illusory truth effect through algorithmic repetition. You don't just see a deepfake once — you see it dozens of times across feeds, retweets, and shares.
Each view makes it feel more real.
What Actually Works
Some media literacy interventions do help. Teaching specific detection techniques — not just warnings — improved accuracy in controlled studies.
Video tutorials showing how to spot artifacts (weird lighting, unnatural blinking, lip-sync errors) beat generic "be careful" messages.
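One of those tells, unnatural blinking, is often checked with the eye aspect ratio (EAR), a standard facial-landmark metric: the ratio drops sharply when an eye closes, so a face that never dips below a threshold across a whole video rarely blinks. A minimal sketch — the landmark coordinates and threshold below are made up for illustration, and a real pipeline would get landmarks from a face tracker:

```python
import math

def eye_aspect_ratio(p):
    """p: six (x, y) eye landmarks ordered p1..p6 around the eye.
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p[1], p[5]) + dist(p[2], p[4])) / (2.0 * dist(p[0], p[3]))

# Hypothetical landmarks: a wide-open eye vs a nearly closed one.
open_eye   = [(0, 0), (2, 2), (4, 2), (6, 0), (4, -2), (2, -2)]
closed_eye = [(0, 0), (2, 0.3), (4, 0.3), (6, 0), (4, -0.3), (2, -0.3)]

BLINK_THRESHOLD = 0.2  # illustrative cutoff between "open" and "closed"

print(round(eye_aspect_ratio(open_eye), 3))    # well above the threshold: open
print(round(eye_aspect_ratio(closed_eye), 3))  # below the threshold: a blink frame
```

Count frames below the threshold over a clip and you have a crude blink rate — which is exactly the kind of cue that generators have since learned to fake, illustrating the arms-race point below.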
But here's the catch: deepfakes are improving faster than human detection skills. What worked six months ago doesn't work today.
The Real Solution
Digital watermarking. AI-generated content markers embedded during creation. Detection algorithms that scan faster than humans can watch.
In short: technology to fight technology.
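The shape of that approach: a marker is bound to the content at creation time, and verification is a cheap machine check rather than a human judgment call. Real systems (e.g. C2PA Content Credentials) use public-key signatures and embedded manifests; the sketch below is a deliberately simplified stand-in using an HMAC over a content hash, with a hypothetical key and generator name:

```python
import hashlib
import hmac
import json

SECRET_KEY = b"creator-signing-key"  # hypothetical key held by the generating tool

def attach_marker(content: bytes, generator: str) -> dict:
    """At creation time: bind a provenance record to the content's hash."""
    record = {"generator": generator,
              "sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_marker(content: bytes, record: dict) -> bool:
    """Platform-side check: recompute the hash and signature.
    Fails if the content was altered after labeling or the record was forged."""
    expected = {"generator": record["generator"],
                "sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(expected, sort_keys=True).encode()
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, record.get("signature", ""))

video = b"synthetic frames..."
marker = attach_marker(video, "example-gen-v1")
print(verify_marker(video, marker))         # True: label intact
print(verify_marker(video + b"!", marker))  # False: content changed after labeling
```

The point of the design is that verification scales: a platform can run this check on every upload in microseconds, which no amount of human media literacy can match.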
The uncomfortable truth is we can't educate our way out of this. Human brains aren't designed to spot synthetic media at scale.
Every "can you spot the deepfake?" quiz is training your brain to accept fakes as normal. The more you see, the less you trust your own judgment.
That's not media literacy. That's desensitization.
UNESCO's report argues for three things: mandatory labeling of AI-generated content, platform responsibility for detection, and legal frameworks that hold creators accountable.
None of those require teaching humans to be superhuman.
What It Means
The deepfake problem isn't about smarter consumers. It's about better infrastructure.
You don't teach people to spot counterfeit money by showing them fakes. You make currency harder to forge and easier to verify.
Same principle applies here.
The next time you see a "test your deepfake detection skills" quiz, remember: the test itself might be part of the problem.