Deepfake X-Rays Now Fool Doctors. Detection Is Losing.
AI-generated X-rays fool trained radiologists across six countries. Detection tools fail in real-world banking. State propaganda goes viral. Deepfakes crossed the quality threshold in three domains at once.

AI-generated X-rays now fool trained radiologists. A study published this week in Radiology tested 17 specialists across six countries on 264 images — half real, half synthetic — and found doctors couldn't reliably tell them apart. Meanwhile, the tools built to catch deepfakes in finance, healthcare, and social media are failing under real-world conditions. Detection is losing the arms race, and the consequences are spreading from hospital wards to bank vaults to battlefields.
The study, conducted by researchers at Mount Sinai's Icahn School of Medicine, gave radiologists two sets of images. One mixed genuine scans with ChatGPT-generated X-rays from multiple body parts. The other paired real chest X-rays with fakes from RoentGen, Stanford's open-source AI model.
The results were stark. When radiologists didn't know fakes were present, detection accuracy dropped sharply. Even when warned that AI-generated images were mixed in, specialists with decades of experience struggled.
"These deepfake X-rays are realistic enough to deceive radiologists, the most highly trained medical image specialists," said lead author Mickael Tordjman. He flagged two threats: fraudulent litigation using fabricated injuries, and cyberattacks that could inject synthetic scans into hospital systems to manipulate diagnoses.
This isn't hypothetical. It's the medical version of a problem already tearing through finance.
The Trust Tax on Every Transaction
Shufti, a verification company, warned this week that deepfake detection tools passing lab tests are failing in production. The gap between controlled benchmarks and messy real-world conditions — low bandwidth, cheap phone cameras, compressed uploads — strips away the fine textures detection models depend on.
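The degradation is easy to reproduce. Here is a minimal sketch, using Pillow and NumPy, of how one round of heavy JPEG re-encoding erodes the high-frequency texture detectors rely on; the input filename and quality setting are hypothetical, not drawn from Shufti's report.

```python
# Minimal sketch of the lab-vs-production gap: one pass of aggressive
# JPEG re-encoding (a stand-in for cheap cameras and compressed uploads)
# strips the high-frequency texture many detection models key on.
# "upload.png" and quality=25 are illustrative only.
import io

import numpy as np
from PIL import Image


def high_freq_share(img: Image.Image) -> float:
    """Fraction of spectral energy outside the lowest frequencies."""
    arr = np.asarray(img.convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(arr)))
    h, w = spectrum.shape
    # Central block after fftshift holds the lowest frequencies.
    low = spectrum[3 * h // 8 : 5 * h // 8, 3 * w // 8 : 5 * w // 8]
    return 1.0 - low.sum() / spectrum.sum()


original = Image.open("upload.png")  # hypothetical pristine input
buf = io.BytesIO()
original.convert("RGB").save(buf, format="JPEG", quality=25)
buf.seek(0)
degraded = Image.open(buf)

print(f"high-frequency share, original: {high_freq_share(original):.3f}")
print(f"high-frequency share, degraded: {high_freq_share(degraded):.3f}")
```

On a typical photo the degraded copy shows a markedly lower high-frequency share, which is exactly the signal a texture-based detector loses in the field.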
Juniper Research projects synthetic identity fraud losses will hit $58.3 billion by 2030, a 153% increase from 2025. Shufti's CTO Frayam Asif put it bluntly: "The industry needs to stop treating lab accuracy as deployment readiness."
What he describes as a "trust tax" is already embedded in every digital transaction. Banks tighten verification thresholds. Manual review queues grow. False positives climb, and legitimate customers get blocked. The friction is real, and it's getting worse.
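The tradeoff behind that friction can be shown with a toy model. The score distributions below are synthetic, not real detector output; the point is only that tightening the flag threshold buys fraud coverage at the cost of blocked customers.

```python
# Toy model of the "trust tax": as a bank tightens its deepfake-flag
# threshold, it catches more synthetic identities but rejects more
# legitimate customers. All numbers here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
genuine = rng.normal(0.20, 0.12, 100_000)  # detector scores, real customers
fakes = rng.normal(0.70, 0.12, 1_000)      # detector scores, deepfake attempts

for threshold in (0.60, 0.50, 0.40):       # flag anything scoring >= threshold
    caught = (fakes >= threshold).mean()
    blocked = (genuine >= threshold).mean()
    print(f"threshold {threshold:.2f}: catches {caught:6.1%} of fakes, "
          f"blocks {blocked:6.1%} of real customers")
```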
Meanwhile in South Africa, Reality Defender just partnered with Certified AI Access to deploy enterprise-grade deepfake detection across the financial sector. It's a pattern: each country scrambling to patch its own defences while the attack surface grows globally.
90% AI Content by Year's End?
Resemble AI's new threat report, released this week, compiled 1,567 verified deepfake incidents from 2025 across 3,253 news stories. The numbers: nearly $1.3 billion in confirmed fraud losses. Twenty percent of incidents involved non-consensual intimate imagery or child sexual abuse material. The average corporate deepfake incident stayed in the news cycle for 3.5 years.
The company launched two free tools — a Chrome extension that scans images, video, and audio across major platforms, and an X bot for checking suspicious posts. Colour-coded badges: green for authentic, red for AI-generated, yellow for uncertain.
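The badge logic itself is simple to sketch. The cutoffs below are invented for illustration (Resemble AI hasn't published its thresholds); the interesting design choice is the explicit yellow band, where the tool abstains rather than guessing.

```python
# Sketch of a three-way badge scheme: map a detector's 0-1
# "probability AI-generated" score to a colour, with an explicit
# uncertain band. Both cutoffs are hypothetical.
def badge(score: float,
          authentic_below: float = 0.35,
          generated_above: float = 0.65) -> str:
    if score < authentic_below:
        return "green"   # likely authentic
    if score > generated_above:
        return "red"     # likely AI-generated
    return "yellow"      # abstain rather than guess

for s in (0.10, 0.50, 0.90):
    print(f"score {s:.2f} -> {badge(s)}")
```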
But here's the uncomfortable projection they cited: Europol estimates that up to 90% of online content could contain some form of AI-generated material by the end of 2026. Even if that figure is inflated, the direction is clear.
Resemble AI is treating this as a consumer problem now, not just an enterprise one. That's telling.
The Propaganda Front
While detection stumbles in hospitals and banks, AI-generated content is thriving as a weapon on social media.
Iran's "Explosive News Team" — a propaganda operation identified by 404 Media — has been producing AI-generated Lego-style videos depicting Trump and Netanyahu. The latest went viral this week across Instagram, Reddit, and Facebook, set to a catchy rap about American foreign policy. Snopes verified the videos as Iranian propaganda distributed across multiple platforms.
At the same time, the White House launched its own livestream app and posted stylised images of Trump that 404 Media described as "fashwave filtered." Two governments producing competing AI-generated content for the same audience — the American public — using the same cheap tools.
404 Media's analysis was pointed: Iran's content speaks to broad American anxieties about gas prices, economic instability, and an unpopular war. A Pew poll from March 25 found 61% of Americans disapprove of Trump's handling of the Iran conflict. The propaganda isn't creating discontent. It's riding it.
Russia, meanwhile, declared Oscar-winning filmmaker Pavel Talankin a "foreign agent" this week. His documentary Mr. Nobody Against Putin, which secretly filmed systematic pro-war propaganda being taught in Russian schools, won an Academy Award. Moscow's response wasn't to deny the footage. It was to designate the messenger.
How Different Regions See This
Here's where the perception gap opens wide.
Western outlets — ScienceDaily, Biometric Update, 404 Media — frame deepfakes primarily as a detection and security problem. The question is always: how do we catch them? How do we build better tools?
Middle Eastern outlets like Shafaq News frame the same technology differently. Their reporting centres on AI as a military tool reshaping the Iran-Israel-US conflict, emphasising algorithmic targeting systems like Israel's Gospel and Lavender platforms. The deepfake problem isn't about fraud — it's about who controls the information battlefield.
African coverage, like IT-Online in South Africa, focuses on institutional readiness. Can local financial systems defend themselves? The question isn't philosophical — it's about whether a bank in Johannesburg can tell a real customer from a synthetic one.
Latin American and South Asian outlets have been largely silent on this week's developments. The detection market is projected to reach $15.1 billion by 2035, but that money flows to companies in North America and Europe. The regions most vulnerable to deepfake-enabled fraud are the ones least likely to have access to detection tools.
The Pattern
Three things happened this week in the same 48-hour window. Doctors couldn't spot fake X-rays. Financial verification tools failed in production. State-backed propagandists produced competing AI content for the same audience.
These aren't separate stories. They're the same story. AI-generated content has crossed the quality threshold in medicine, finance, and information warfare simultaneously. Detection, in all three domains, is playing catch-up.
The question isn't whether deepfakes are getting better. That's settled. The question is what happens to institutions — hospitals, banks, democracies — that were built on the assumption you could trust what you see.
That assumption is gone. Nothing has replaced it yet.
Sources & Verification
Based on 5 sources from 4 regions
- ScienceDaily / Radiology (RSNA) · North America
- Biometric Update · International
- 404 Media · North America
- IT Brief Asia / Resemble AI · Asia-Pacific
- Shafaq News · Middle East