AI Wins at Spotting Fake Photos, Humans Win at Videos
A new University of Florida study reveals the detection split: AI crushes humans at spotting deepfake photos, but people outperform machines on fake videos.
AI programs can spot deepfake photos better than people. But when it comes to videos, humans win. A University of Florida study just documented this split, testing how well machines and people detect synthetic media.
The researchers tested AI detection tools against human participants. For still images, the machines crushed it. High accuracy across the board. For videos, people consistently outperformed the algorithms.
This matters because deepfakes aren't slowing down. They're accelerating.
The Detection Arms Race
Companies like CloudSEK and Sensity AI are building forensic-grade detection tools. They analyze videos, images, and audio for signs of AI generation. The tech works by looking for artifacts — tiny inconsistencies that reveal synthetic origins.
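To make "artifacts" concrete, here's a toy sketch of one classic signal: generative upsampling tends to leave unusual high-frequency energy in an image's spectrum. This illustrates the general idea only; it is not how CloudSEK's or Sensity's tools actually work, and the band radius and score here are arbitrary assumptions.

```python
import numpy as np

def artifact_score(gray: np.ndarray) -> float:
    """Crude frequency-domain heuristic. GAN-style upsampling often leaves
    extra high-frequency energy that natural photos lack. Returns the
    fraction of spectral energy outside an arbitrary low-frequency band."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    r = min(h, w) // 8  # "low frequency" radius: an invented cutoff
    yy, xx = np.ogrid[:h, :w]
    low_band = (yy - cy) ** 2 + (xx - cx) ** 2 <= r * r
    total = spectrum.sum()
    return float(spectrum[~low_band].sum() / total) if total else 0.0

# A score well above a baseline of known-real photos is a weak signal,
# not a verdict. Production tools combine many such features.
```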
But detection only works if it keeps pace with creation. And right now, creation's winning.
New deepfake tools drop monthly. Each one better than the last. Faces that move naturally. Voices that sound human. Video that passes casual inspection.
The University of Florida findings suggest we're fighting this wrong. We're building AI to catch AI, when humans might be better judges — at least for video.
How Platforms Handle Manipulation
Facebook and Twitter removed over 317,000 accounts between 2019 and 2020. That's accounts flagged for manipulation campaigns, coordinated inauthentic behavior, and disinformation operations.
The platforms use automated detection plus human review. Algorithms flag suspicious patterns. People make the final call.
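In code, that division of labor often comes down to a pair of thresholds. A minimal sketch, with hypothetical names and cutoffs (no platform publishes its real ones):

```python
def triage(score: float, auto_remove: float = 0.98, review: float = 0.70) -> str:
    """Route a flagged post by model confidence. The model acts alone only
    when it is near-certain; the gray zone goes to a human reviewer."""
    if score >= auto_remove:
        return "remove"        # clear-cut violation, automated action
    if score >= review:
        return "human_review"  # algorithm flags it, a person makes the call
    return "keep"              # below the flagging bar

# e.g. triage(0.85) -> "human_review": flagged by the model, decided by a person
```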
But here's the problem: false information spreads faster than true information on social media. Multiple studies confirm this. The manipulation campaigns know it. They design for velocity, not accuracy.
Platform curation creates another vulnerability. Manipulators don't need major influencers to spread content. They exploit how platforms surface and recommend posts. Small coordinated groups can amplify messages across networks.
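Here's a toy version of how that coordination shows up in data: many distinct accounts pushing near-identical text inside a tight time window. Real coordinated-behavior detection uses far richer signals (follower graphs, device fingerprints, posting cadence); the thresholds below are invented for illustration.

```python
from collections import defaultdict

def coordinated_clusters(posts, window_secs=300, min_accounts=5):
    """Toy coordination check. `posts` is an iterable of
    (account_id, timestamp, text) tuples. Flags any normalized text that
    `min_accounts` distinct accounts posted within `window_secs`."""
    by_text = defaultdict(list)
    for account, ts, text in posts:
        normalized = " ".join(text.lower().split())
        by_text[normalized].append((ts, account))
    clusters = []
    for text, hits in by_text.items():
        hits.sort()  # order by timestamp
        accounts = {account for _, account in hits}
        if len(accounts) >= min_accounts and hits[-1][0] - hits[0][0] <= window_secs:
            clusters.append((text, sorted(accounts)))
    return clusters
```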
The Surveillance Layer
While we watch deepfakes and disinformation, governments are watching us.
The UN Human Rights Office released a report warning about spyware like Pegasus. It turns smartphones into 24-hour surveillance devices. Everything on your phone becomes accessible. Then the phone itself becomes a tool to spy on your physical life.
This isn't theoretical. Multiple governments have deployed Pegasus against journalists, activists, and political opponents.
In the U.S., the Electronic Frontier Foundation is tracking automated license plate readers. ICE and other federal agencies use them for mass surveillance. The systems can track movement patterns across cities.
Biometric surveillance is expanding too. Facial recognition. Fingerprint scanning. The tech offers precision but raises questions about consent and privacy.
Information as a Weapon
Information warfare isn't new. What's new is the scale and speed.
Coordinated campaigns can flood platforms in hours. Deepfakes can undermine trust in authentic footage. Surveillance tools can identify and target specific individuals.
The mechanisms are well-documented. Studies show how evidence collages — combining real facts with misleading context — create believable disinformation. How platform algorithms can be gamed to amplify specific messages. How surveillance data can be weaponized against vulnerable populations.
Cyber troops around the world spent nearly $10 million on political advertisements, even as platforms removed hundreds of thousands of accounts. Private firms increasingly run these manipulation campaigns.
The Human Element
The University of Florida study highlights something important. Humans can still spot manipulation in video better than AI can. We notice subtle wrongness. Unnatural movements. Timing that's slightly off.
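One way to see why frame-by-frame detectors struggle here: they judge each frame in isolation, while humans integrate across time. A rough proxy for that temporal signal, assuming you already have per-frame detector scores (this is an illustrative heuristic, not the study's method):

```python
import numpy as np

def temporal_jitter(frame_scores: np.ndarray) -> float:
    """Mean absolute change in a per-frame detector score between
    consecutive frames. Synthetic faces often 'flicker', so their
    frame-to-frame jitter tends to run higher than real footage."""
    return float(np.abs(np.diff(frame_scores)).mean())

# Real video forensics also track cues humans pick up intuitively:
# lighting consistency, blink rate, lip-sync timing.
```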
But that advantage only holds if people are actually watching critically. Most of us scroll fast. We react, share, move on.
Training programs are emerging. Companies like KnowBe4 build deepfake awareness into security education. The idea: if employees can spot synthetic media, they're less likely to fall for it.
But awareness competes with velocity. Detection competes with creation. Privacy competes with surveillance.
What's Actually Happening
Tools exist to create convincing fakes. Tools exist to detect them. Both are improving.
Platforms remove millions of manipulative accounts. New ones appear daily.
Governments deploy sophisticated surveillance. Activists push for digital rights protections.
Information gets weaponized. People try to stay informed.
This isn't a story with a clear solution. It's a set of mechanisms playing out simultaneously. Understanding those mechanisms doesn't solve them. But it's where awareness starts.
The question isn't whether information warfare exists. It does. The question is whether enough people understand how it works to make informed decisions about what they're seeing, sharing, and trusting.
Right now, the answer's unclear. Detection tech helps. Platform policies help. Human judgment helps. None of it's enough on its own.
The arms race continues. Creation tools get better. Detection tools get better. Surveillance tools get better. Privacy protections lag behind.
That's where we are.