Censorship Deals, Deepfake Hires, Mass Surveillance
A landmark US censorship settlement, deepfake job candidates, Canada expanding police surveillance powers, and Nigeria buying mass AI monitoring. This is how information gets weaponised right now.

Governments, corporations, and individuals are all weaponising information right now — through censorship settlements, deepfake job candidates, expanded surveillance laws, and AI-powered mass monitoring. Here's what happened in the past 48 hours across five countries.
The US government just agreed to stop pressuring social media platforms to censor
The Missouri v. Biden lawsuit reached a historic settlement on March 24, 2026. The New Civil Liberties Alliance secured a consent decree that bars three federal agencies — the Surgeon General's office, the CDC, and CISA — from coercing social media platforms to suppress constitutionally protected speech.
The case began during the Biden administration's COVID-era content moderation push. Government agencies pressured Facebook, Instagram, X, LinkedIn, and YouTube to remove posts the administration disagreed with. Critics described it as a "whole of government" censorship operation.
The consent decree does something specific. It prohibits these agencies from threatening social media companies into removing content. It also bars them from directing or vetoing content moderation choices.
This case previously reached the Supreme Court as Murthy v. Missouri. The Court vacated an earlier injunction on standing grounds. The settlement bypasses that ruling entirely.
The mechanism matters here. The government didn't pass a censorship law. It used informal pressure — meetings, emails, implied threats — to achieve the same result. The settlement acknowledges this tactic and specifically prohibits it going forward.
Deepfake candidates are showing up to job interviews
In Bengaluru, an AI interview platform called InCruiter caught a deepfake candidate during a live automated interview for a global fintech client.
The applicant appeared normal at first. They answered technical questions naturally. But InCruiter's continuous detection system flagged subtle visual anomalies — the kind humans miss.
The person on screen wasn't real. An AI-generated avatar had been overlaid onto the video feed, replicating someone else's face and voice in real time. The goal: pass the automated screening and land a role at a company handling sensitive financial data.
InCruiter's data paints a broader picture. Their deepfake detection system flags fraudulent activity in 25 to 30 percent of suspicious interview sessions. That's nearly double what human interviewers catch. Across the industry, cheating in online interviews runs at 10 to 15 percent.
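InCruiter hasn't published its pipeline, but the basic shape of continuous screening is straightforward. Here's a minimal sketch, assuming a per-frame anomaly model that scores how synthetic each video frame looks; the function names, threshold, and window sizes are illustrative placeholders, not the company's actual system.

```python
# Minimal sketch of continuous per-frame deepfake screening during a live interview.
# The scoring model, threshold, and window sizes are hypothetical, not InCruiter's pipeline.
from collections import deque
from typing import Callable, Iterable


def flag_session(
    frames: Iterable[object],
    anomaly_score: Callable[[object], float],  # hypothetical model: 0.0 = looks real, 1.0 = looks synthetic
    threshold: float = 0.8,   # per-frame score above which a frame counts as anomalous
    window: int = 30,         # roughly one second of video at 30 fps
    min_hits: int = 20,       # anomalous frames within the window needed to flag the session
) -> bool:
    """Return True once enough recent frames look synthetic to flag the interview."""
    recent = deque(maxlen=window)
    for frame in frames:
        recent.append(anomaly_score(frame) >= threshold)
        if sum(recent) >= min_hits:
            return True
    return False


# Toy usage: scores near 0 stand in for real frames, scores near 1 for an avatar overlay.
if __name__ == "__main__":
    scores = [0.1] * 40 + [0.95] * 40  # candidate swaps in an overlay mid-interview
    print(flag_session(scores, anomaly_score=lambda s: s))  # True
```

The sliding window is the point: a single odd frame is noise, but a sustained run of high scores is the kind of pattern a real-time face-and-voice overlay would tend to produce.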
Separately, deepfake X-rays are now fooling radiologists. A study published this week found that only seven of 17 radiologists could identify AI-generated medical images — even when told fakes were present.
The same technology that powers entertainment and creative tools is being repurposed for fraud, identity theft, and infiltration.
Canada wants police to access your digital life faster
On March 12, 2026, the Canadian government introduced Bill C-22, the Lawful Access Act. It expands police and intelligence agency powers to access personal digital information, in many cases without the safeguards Canadians are used to.
The bill has three parts. Part 1 amends the Criminal Code and the Canadian Security Intelligence Service Act to speed up digital data collection during investigations. Part 2 creates a new law requiring telecom and electronic service companies to build and maintain technical capabilities for government data access. Part 3 promises a parliamentary review — after three years.
The most significant change is the legal threshold. Traditional Canadian law requires "reasonable grounds to believe" before police can compel someone to hand over data. Bill C-22 lowers this to "reasonable grounds to suspect." That's a meaningful difference. Suspicion requires less evidence than belief.
The bill also creates new preservation orders that force companies to keep your data available for police access, even before charges are filed.
The government frames this as modernisation. Digital communications have outpaced existing laws. But the mechanism chosen — lowering legal standards and compelling private companies to build surveillance infrastructure — goes further than closing gaps.
Nigeria is Africa's biggest surveillance buyer
Nigeria has spent over $470 million on smart city surveillance technology, making it Africa's largest buyer of AI-powered monitoring systems.
At the centre is the National Public Security Communication System. It's a joint project financed 15 percent by the Nigerian government and 85 percent through a China Eximbank loan — roughly $399 million in external borrowing. Chinese firms ZTE Corporation and Hikvision won the contracts.
Lagos State alone has deployed approximately 23,000 CCTV cameras, many equipped with facial recognition. Oyo State partnered with Huawei and Hikvision for similar systems.
The stated goal is public safety and crime reduction. But Nigeria has no specific legislation regulating large-scale surveillance. There are no mandatory human rights impact assessments before deployment. Privacy advocates point to a $470 million system operating in a legal vacuum.
The project has also faced implementation problems. Systems installed as early as 2013 went offline or fell into disrepair. Allegations of mismanagement and possible fund misappropriation have surfaced.
The pattern is familiar globally. Governments invest heavily in surveillance infrastructure before building the legal frameworks to govern it.
The EU sanctioned four Russian disinformation operatives
The European Union added four individuals to its sanctions list for spreading Russian propaganda. Among them: a former military serviceman and a freelance journalist accused of war crimes.
These sanctions sit within a broader EU effort to hold specific people accountable for information operations, rather than just targeting platforms or content.
The approach differs from the US model. Where the Missouri v. Biden case focused on what the government can't do to platforms, the EU is targeting the individuals producing and distributing state-backed disinformation.
Electronic warfare is reshaping the Iran conflict
The US-Israel war on Iran isn't just missiles and drones. An analysis from India's Observer Research Foundation documents how the electromagnetic spectrum became a primary battleground.
Before Operation Epic Fury's first aircraft entered Iranian airspace on February 28, 2026, Iran's electronic environment had already been dismantled. Radars blinded. Command-and-control links severed. Communications networks taken down.
The Iranian hacker group Handala responded with a cyberattack on Stryker Corporation, a major US medical technology firm. The attack disrupted global operations and exfiltrated large volumes of data.
This convergence — kinetic strikes, electronic warfare, and cyber operations happening simultaneously — represents something military theorists predicted but hadn't seen at this scale. The information domain isn't separate from the physical battlefield. It's increasingly where wars are decided before the first visible shot.
What connects all of this
These stories span five countries and multiple continents. The common thread: information itself has become the territory being fought over.
In the US, the battle is over who controls what appears on social media. In Canada, it's about who can access private digital communications. In Nigeria, it's cameras and facial recognition deployed without legal guardrails. In Europe, it's holding propagandists individually accountable. In the Iran conflict, it's blinding an entire nation's information infrastructure before the bombs arrive.
Each case involves a different mechanism. Informal government pressure. Legislative expansion of surveillance powers. Mass procurement of monitoring technology. Targeted sanctions. Electronic spectrum dominance.
But the underlying dynamic is the same. Control the information environment, and you control the outcome — whether that's an election, a hiring process, a criminal investigation, or a war.
Understanding these mechanisms doesn't require picking sides. It requires paying attention to how they work.
Sources & Verification
Based on 5 sources from 3 regions
- GlobeNewsWire / NCLA (North America)
- Analytics Insight (South Asia)
- Kyla Lee / Canadian Law (North America)
- TechAfrica News (Africa)
- ORF Online (South Asia)