When Your Prayer App Becomes a Weapon
A hacked prayer app sent propaganda to 5 million Iranians during airstrikes. Iran cut 90 million people offline for 280 hours. YouTube's racing to detect political deepfakes. Here's how information warfare actually works in 2026.

Five million Iranians opened their prayer app on March 1st. They got propaganda instead of prayer times. The messages arrived in 30-minute bursts, timed to airstrikes. "Help has arrived," the first notification read.
The app was BadeSaba Calendar. It tells Muslims when to pray. Someone hacked it before the strikes, then waited.
This is information warfare in 2026. Not just fake news. Hacked apps. Internet kill switches. AI-generated politicians. The battlefield is your phone.
The Prayer App Attack
BadeSaba Calendar has been downloaded over 5 million times from Google Play. On the morning of March 1st, shortly after Israeli and U.S. strikes began in Iran, users received rapid-fire notifications. They didn't come from the Iranian government warning of danger. They came from their prayer app, seemingly hacked by foreign actors.
Security researcher Bruce Schneier analyzed the operation. "It happened so fast that this is most likely a government operation," he wrote. "I can easily envision both the U.S. and Israel having hacked the app previously, and then deciding that this is a good use of that access."
The timing was surgical. Messages synchronized with explosions. The infrastructure was already in place, dormant, waiting for activation.
No group has claimed responsibility.
Iran's Digital Blackout
Iran's response wasn't defensive systems or air-raid apps. It was a kill switch.
Since February 28th, Iranian authorities have enforced one of history's most comprehensive internet shutdowns. Connectivity dropped to 1% of normal levels. Ninety million people went offline. NetBlocks, which monitors global internet access, reports that Iranians spent over 40% of 2026 so far under internet shutdown — more than 280 hours of enforced silence.
But the shutdown isn't total. It's selective.
Iran operates a whitelist system. Most citizens can only access domestic internet. A small group of pre-approved users — those who can "convey the voice of the system to the world," according to government spokesperson Fatemeh Mohajerani — retain global access.
This isn't censorship. It's tiered reality. The state controls who speaks to the outside world.
Iranian mobile operators sent messages to users: sharing photos of bombing sites or connecting to the international internet will result in phone line suspension and referral to the judiciary. Using a VPN became a prosecutable offense. Authorities framed internet access as collaboration with the enemy.
When the Israeli Defense Forces issued evacuation warnings on social media, Iranians couldn't see them. The warnings fell into a digital void.
YouTube's Deepfake Defense
On March 10th, YouTube announced it's expanding AI deepfake detection tools to politicians, government officials, and journalists.
The technology launched last year for 4 million YouTube creators. It works like Content ID — YouTube's copyright system — but scans for simulated faces instead of copyrighted music. When it finds an AI-generated likeness, the impersonated person can request removal.
"This expansion is really about the integrity of the public conversation," said Leslie Miller, YouTube's VP of Government Affairs. "The risks of AI impersonation are particularly high for those in the civic space."
Not all matches get removed. YouTube evaluates requests under privacy guidelines. Parody and political critique are protected. Direct impersonation for manipulation isn't.
The company wouldn't confirm which politicians are testing the tool. But the timing signals urgency. AI-generated political deepfakes have moved from experimental to operational.
The Healthcare Front
Deepfakes aren't just political. They're financial.
On March 12th, health tech company Codoxo launched Deepfake Detection for U.S. health insurers. The tool identifies AI-generated or manipulated medical records submitted for claims.
The threat is real. Medical documentation can be fabricated with AI tools. Fake diagnoses. Synthetic lab results. Fraudulent treatment histories. Insurance companies are deploying detection systems before the fraud becomes systemic.
Information warfare isn't just elections and military strikes. It's every system that relies on trust in documentation.
Venezuela's Website Blockade
On March 14th, protesters gathered in Caracas demanding the interim government lift a blockade of around 200 websites, including major media outlets.
The pattern repeats. During political instability, governments restrict information access. Venezuela's method is different from Iran's kill switch — targeted blocking instead of total shutdown — but the objective is identical. Control the information environment. Limit what people see.
Critics call it censorship. Governments call it security. The technical mechanism is the same.
The Underground Economy
Iranians haven't surrendered. An underground network of "configs" — specially formatted connection files for VPN software like V2Ray and Xray — circulates through private Telegram channels.
These configs have short lifespans. Iran's deep packet inspection technology hunts for VPN signatures. Once detected, the config dies. Users buy new ones. The market is expensive and risky.
Some turn to Starlink, but owning the satellite hardware carries arrest risk.
State-linked Telegram channels now encourage citizens to report people sharing bombing photos or accessing the global internet. When someone posts a video of an airstrike, these groups analyze metadata and visual details to identify the photographer's location. Their information gets published. They're labeled collaborators.
Chief Justice Gholam-Hossein Mohseni-Ejei warned of "no leniency." State television discussed punishments ranging from property confiscation to death for media actions that "damage national security."
The New Battlefield
The hacked prayer app. The internet kill switch. The deepfake politician. The fake medical record.
These aren't separate stories. They're the same battlefield.
Platforms are infrastructure. Apps are access points. Governments control both. When war comes, the first strike isn't always kinetic. It's informational.
Five million Iranians thought they were opening a prayer app. They got psychological operations instead. The app worked exactly as designed — just not by its original creators.
That's information warfare in 2026. The weapon is already in your hand. Someone else is pulling the trigger.
Sources & Verification
Based on 4 sources from 3 regions
- Schneier on Security (North America)
- Index on Censorship (International)
- TechCrunch (North America)
- Manila Times (Asia)