Censorship Deals, Deepfake Hires, Mass Surveillance
A landmark US censorship settlement, deepfake job candidates, Canada expanding police surveillance powers, and Nigeria buying mass AI monitoring. This is how information gets weaponised right now.

Governments, corporations, and individuals are weaponising information through censorship settlements, deepfake job candidates, expanded surveillance laws, and AI-powered mass monitoring. Five countries, 48 hours.
The US government just agreed to stop pressuring social media platforms to censor
The Missouri v. Biden lawsuit settled on March 24. The New Civil Liberties Alliance secured a consent decree barring three federal agencies — the Surgeon General's office, the CDC, and CISA — from coercing social media platforms to suppress constitutionally protected speech.
The case arose from COVID-era content moderation, when government agencies pressured Facebook, Instagram, X, LinkedIn, and YouTube to remove posts the administration disagreed with. The consent decree bars these agencies from threatening platforms into removing content or directing content moderation choices.
The case previously reached the Supreme Court as Murthy v. Missouri. The Court vacated an earlier injunction on standing grounds. The settlement bypasses that ruling.
The mechanism matters. The government didn't pass a censorship law. It used informal pressure — meetings, emails, implied threats — to achieve the same result. The settlement names that tactic and bans it.
Deepfake candidates are showing up to job interviews
In Bengaluru, AI interview platform InCruiter caught a deepfake candidate during a live automated interview for a global fintech client.
The applicant looked normal. Answered technical questions naturally. But InCruiter's detection system flagged subtle visual anomalies humans miss. The person on screen wasn't real — an AI-generated avatar overlaid in real time, replicating someone else's face and voice. The goal: pass screening and land a role at a company handling sensitive financial data.
The broader picture: InCruiter's system flags fraud in 25–30% of suspicious sessions. Nearly double what human interviewers catch. Industry-wide, cheating in online interviews runs at 10–15%.
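The anomalies such systems look for are often temporal: a live face drifts smoothly between video frames, while a real-time overlay that re-renders the face each frame tends to jitter. As a toy illustration of that general idea (a hypothetical sketch, not InCruiter's actual method), one can score a sequence of tracked face-landmark positions by frame-to-frame movement and flag sessions where the jitter is abnormal:

```python
import numpy as np

def jitter_score(landmarks: np.ndarray) -> float:
    """Mean frame-to-frame displacement of tracked face landmarks.

    landmarks: array of shape (frames, points, 2), pixel coordinates.
    A live face moves smoothly; an overlay rendered independently per
    frame tends to produce larger, noisier jumps between frames.
    """
    deltas = np.diff(landmarks, axis=0)              # movement per frame
    return float(np.linalg.norm(deltas, axis=2).mean())

def flag_session(landmarks: np.ndarray, threshold: float = 2.0) -> bool:
    """Flag a session as suspicious when landmark jitter is abnormal."""
    return jitter_score(landmarks) > threshold

# Smooth drift, simulating a live face: ~0.1 px movement per frame.
rng = np.random.default_rng(0)
smooth = np.cumsum(np.full((50, 68, 2), 0.1), axis=0)
# Noisy overlay: independent ~5 px jumps added on every frame.
noisy = smooth + rng.normal(scale=5.0, size=smooth.shape)

print(flag_session(smooth))  # False
print(flag_session(noisy))   # True
```

Real detectors combine many such signals (lighting consistency, blink patterns, compression artifacts); the threshold and landmark counts here are invented for illustration only.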
Separately, deepfake X-rays are fooling radiologists. Only seven of 17 could identify AI-generated medical images — even when told fakes were present.
Canada wants police to access your digital life faster
Canada introduced Bill C-22, the Lawful Access Act, on March 12. It expands police and intelligence powers to access personal digital information — in many cases, without the safeguards Canadians expect.
Three parts. Part 1 amends the Criminal Code and CSIS Act to speed digital data collection. Part 2 forces telecom and electronic service companies to build and maintain government-accessible surveillance infrastructure. Part 3 promises parliamentary review — after three years.
The key change: the legal threshold. Canadian law traditionally requires "reasonable grounds to believe" before police can compel data handover. Bill C-22 lowers this to "reasonable grounds to suspect." Suspicion requires less evidence than belief.
New preservation orders also force companies to hold your data for police access before charges are filed.
The government calls it modernisation. The mechanism — lowering legal standards and compelling companies to build surveillance infrastructure — goes further than closing gaps.
Nigeria is Africa's biggest surveillance buyer
Nigeria has spent $470 million on smart city surveillance tech, making it Africa's largest buyer of AI-powered monitoring.
The centrepiece: the National Public Security Communication System. Financed 15% by Nigeria, 85% through a $399 million China Eximbank loan. ZTE and Hikvision won the contracts.
Lagos alone has deployed roughly 23,000 CCTV cameras, many with facial recognition. Oyo State partnered with Huawei and Hikvision for similar systems.
The stated goal: public safety. But Nigeria has no legislation regulating large-scale surveillance. No mandatory human rights assessments before deployment. A $470 million system in a legal vacuum.
Systems installed as early as 2013 went offline or fell into disrepair. Allegations of mismanagement and fund misappropriation have surfaced. The pattern is global: governments build surveillance infrastructure before building the legal frameworks to govern it.
The EU sanctioned four Russian disinformation operatives
The EU added four individuals to its sanctions list for spreading Russian propaganda — including a former military serviceman and a freelance journalist accused of war crimes.
The approach differs from the US model. Missouri v. Biden focused on what government can't do to platforms. The EU targets the individuals producing and distributing state-backed disinformation.
Electronic warfare is reshaping the Iran conflict
The Iran war isn't just missiles and drones. India's Observer Research Foundation documents how the electromagnetic spectrum became a primary battleground.
Before Operation Epic Fury's first aircraft entered Iranian airspace on February 28, Iran's electronic environment had already been dismantled. Radars blinded. Command links severed. Communications networks down.
Iranian hacker group Handala responded with a cyberattack on Stryker Corporation, a major US medical tech firm — disrupting global operations and exfiltrating large volumes of data.
Kinetic strikes, electronic warfare, and cyber operations running simultaneously. The information domain isn't separate from the physical battlefield. It's where wars are decided before the first visible shot.
What connects all of this
Five countries. One thread: information itself is the territory being fought over.
In the US, who controls social media content. In Canada, who accesses private communications. In Nigeria, cameras and facial recognition without legal guardrails. In Europe, holding propagandists individually accountable. In the Iran conflict, blinding a nation's information infrastructure before the bombs arrive.
Different mechanisms. Informal pressure. Legislative expansion. Mass procurement. Targeted sanctions. Spectrum dominance. Same dynamic: control the information environment, control the outcome — whether that's an election, a hiring process, a criminal investigation, or a war.
Understanding these mechanisms doesn't require picking sides. It requires watching how they work.
Sources & Verification
Based on 5 sources from 3 regions
- GlobeNewsWire / NCLA (North America)
- Analytics Insight (South Asia)
- Kyla Lee / Canadian Law (North America)
- TechAfrica News (Africa)
- ORF Online (South Asia)
Keep Reading
AI Deepfakes Flood the Iran War. What's Real?
Over 110 AI deepfakes with pro-Iran messaging identified in two weeks. How artificial intelligence is weaponising information in the 2026 Iran conflict.
AI Powers 27% of Disinformation Campaigns in 2026
The EU tracked 540 disinformation incidents across 10,500 channels in 2025. AI-generated text, audio, and video appeared in more than one in four. Russia ran 29% of attributed cases.
AI Deepfakes Are Running Both Sides of the Iran War
Iran war AI deepfakes hit 110+ confirmed fakes in two weeks. Both Tehran-linked and Israeli-backed networks are running operations. Here's how the mechanism works.