State of Information: March 22–29 War Week

Meta's own Oversight Board says Community Notes can't replace fact-checking. EDMO calls Iran the first AI war. Russia fabricates assassination videos targeting Hungary's PM. Reuters confirms deepfakes are now standard in US midterm campaigns. The enforcement gap between AI-generated disinformation and platform response isn't theoretical — it's 900 notes versus 35 million labels.
This is the weekly information warfare report.
The Numbers That Define the Gap
Start here: 900 and 35 million.
Meta's Oversight Board published its assessment this week. In six months of the US rollout, Community Notes produced roughly 900 published notes. In the same period, professional fact-checkers in the EU labeled approximately 35 million posts across Facebook and Instagram.
Only 6% of proposed notes ever get published. The typical delay between submission and publication runs 26 to 65 hours. By then, the content has already peaked.
On X, the picture is worse. Analysis of January 2021 through January 2025 data shows 87.7% of all Community Notes remained stuck in "Needs More Ratings" — never published at all. Meta's board warned the system is "vulnerable to manipulation" by the same AI tools it's supposed to combat. In certain countries, the board recommended Meta shouldn't introduce Community Notes at all.
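The arithmetic behind those headline numbers is worth making explicit. A minimal sketch using only the figures reported above; the submission total is inferred from the 6% publication rate and is an estimate, not a reported number:

```python
# Back-of-envelope arithmetic on the enforcement gap, using only the
# figures reported above. The submission total is inferred from the 6%
# publication rate and is an estimate, not a reported figure.

published_notes = 900            # Community Notes published in 6 months of the US rollout
fact_check_labels = 35_000_000   # posts labeled by EU fact-checkers in the same period

gap_ratio = fact_check_labels / published_notes
print(f"Labels per published note: {gap_ratio:,.0f}")            # ~38,889

publication_rate = 0.06          # share of proposed notes ever published
implied_submissions = published_notes / publication_rate
print(f"Implied note submissions: {implied_submissions:,.0f}")   # ~15,000

# On X: share of notes stuck in "Needs More Ratings", Jan 2021 - Jan 2025
stuck_share = 0.877
print(f"Notes never shown on X: {stuck_share:.1%}")              # 87.7%
```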
The European Fact-Checking Standards Network welcomed the ruling. CNET called it Meta's week of bad news. IFCN Director Angie Drobnic Holan said professional fact-checking "enabled Meta to apply labels to approximately 35 million Facebook and Instagram posts" — a scale Community Notes can't approach.
This matters right now because the Iran war has flooded platforms with AI-generated content at a rate that makes 900 notes look like bringing a garden hose to a forest fire.
The First AI War Gets Its Name
EDMO made it official this week. The European Digital Media Observatory designated the Iran-Israel-US conflict as "the first AI war" — the first major conflict where generative AI produced more misinformation than traditional manipulation.
Cyabra, the Israeli social media analytics firm, documented the scale: 145 million views of Iranian-linked disinformation content in under two weeks. Not over months. Two weeks.
The Cipher Brief traced Iran's evolution: crude radio and print during the Iran-Iraq War. Sockpuppet accounts and recycled footage in the 2010s. Cotton Sandstorm hijacking streaming services with a deepfake newscast in December 2023. Full-scale wartime AI deployment from late February 2026 onward.
What's new this week: the New York Times reported on March 28 that Iran is waging "a sophisticated information war, aided by Russia and China." The NYT confirmed the operation produces "a steady torrent of propaganda, overstated narratives and outright disinformation" designed to exploit worldwide opposition to the US-Israeli military campaign.
France 24, citing Clemson University's Media Forensics Hub, revealed the pivot happened in 24 hours. IRGC-affiliated accounts that had spent months posting about Scottish independence and Irish politics switched overnight to war propaganda. Same accounts. Same infrastructure. New mission: target US anti-war sentiment as a strategic center of gravity.
NPR confirmed the mainstreaming has reached official channels. IRGC spokesman Zolfaghari now delivers Trump mockery in English. "Hey Trump, you are fired." Epstein references are weaponised as psychological hooks for Western audiences. Iran's state diplomacy has absorbed internet troll culture.
X: Iran's Uncontested Primary Channel
CEDMO and AFP analysis confirmed this week that X remains Iran's primary disinformation channel despite the March 3 revenue-sharing crackdown.
ISD researcher Joe Bodnar told AFP that feeds are "still flooded with AI-generated content." A blue-check monetised account posted an AI clip depicting an Iranian "nuclear-capable" strike on Israel. It earned more views than Nikita Bier's crackdown announcement.
Here's the structural problem: Bier proposed a regional revenue-weighting system that would have starved foreign disinformation farms of income. Musk overruled it within hours. The Disinformation Observer's weekly analysis found Iranian actors "didn't bother diversifying" — they planted lies on X exclusively because the platform's incentive structure works in their favour.
NewsGuard flagged the False Claim of the Week on March 28: video game footage presented as Iranian missiles striking a US Navy ship in the Strait of Hormuz. It spread across multiple platforms.
Meanwhile, the DOJ seized four domains linked to Iran's Ministry of Intelligence — the Handala Hack network — used for death threats against journalists, stolen data publishing, and "faketivist" psychological operations. Unit 42 confirmed Handala Hack as "the most prominent Iranian persona" blending data theft with cyber warfare.
Domestically, Iran arrested 466 people for "online activities aimed at undermining national security" — the largest single security sweep since the war began.
The pattern is symmetric: fabricate outward, suppress inward.
Hungary: 15 Days to Election, Three Russian Networks Active
Russia's election interference in Hungary escalated from social media manipulation to something researchers haven't seen before.
The Matryoshka botnet — which had spent weeks amplifying existing anti-opposition content — shifted to proactive fabrication. It produced a video falsely attributed to Moldovan media urging Hungarians to "take up arms" and kill PM Viktor Orbán. Antibot4Navalny analysts told Politico this was the first time a Russian disinformation campaign had invoked an assassination narrative ahead of any actual event, suggesting the botnet is now coordinating directly with intelligence services.
This sits alongside the Washington Post's March 21 revelation: Russian operatives proposed "The Gamechanger" — staging an assassination attempt on Orbán to motivate his supporters after polls showed opposition leader Péter Magyar leading.
Three separate Russian-linked networks are now confirmed active simultaneously: Matryoshka (bot amplification and fabrication), Operation Overload (content seeding), and Storm-1516 (YouTube and X anti-Magyar campaigns). Euronews published an investigation confirming a pro-Kremlin network is impersonating its brand — creating fake articles attributed to Euronews to spread anti-Magyar content.
GRU officers remain physically present in Budapest.
The Hungarian government's response: opening an espionage case against Szabolcs Panyi, the investigative journalist who exposed these operations. The New York Times reported the case was opened 17 days before the election. The state is weaponising its counter-espionage apparatus not against the interference — but against the journalist documenting it.
The EU's rapid-response election disinformation system has 44 signatories. It's voluntary. Fifteen days remain.
Deepfakes Go Standard in US Midterms
Reuters published a major investigation on March 28: AI deepfakes are blurring reality in 2026 US midterm campaigns.
The National Republican Senatorial Committee created at least three deepfake ads this cycle. The most prominent one targets Texas Senate candidate James Talarico — a fabricated video the NRSC defended by saying Democrats were "panicking after seeing and hearing Talarico's own words." His campaign called it a manipulated deepfake.
In Georgia, Republican Rep. Mike Collins created a deepfake of Democratic Sen. Jon Ossoff appearing to say: "I just voted to keep the government shut down. They say it would hurt farmers, but I wouldn't know."
In Massachusetts, Republican gubernatorial candidate Brian Shortsleeve produced an AI voice clone of Governor Maura Healey with no explicit disclosure. His campaign argued disclosure is only needed if the content "is not obvious to a reasonable viewer."
Purdue University professor Daniel Schiff, who has studied thousands of deepfakes, told Reuters: "The types of damage that we can do to the rigor and credibility of elections and democratic systems very much risks being supercharged."
Sen. Mark Warner sent letters to major social media companies and AI firms this week demanding faster action. He called it a "limited window" before manipulation becomes a "routine feature" of the midterm landscape.
Thirty-one states now have laws regulating political deepfakes. Nineteen don't. There's no federal framework.
Salon reported that 19 of 20 primary candidates backed by AI companies won their races. The industry funding the campaigns is the same industry whose tools enable the deepfakes. Nobody in Washington seems troubled by the circularity.
Russia Opens a New Front: The Baltic States
Latvia's Defence Ministry disclosed on March 27 that Russia is conducting a "large-scale information warfare campaign" against Latvia, Lithuania, and Estonia.
The false claim: Baltic states are allowing Ukrainian forces to use their territory for strikes against Russia.
The reality, per Defence News: Ukrainian drones targeting Russian Baltic Sea coast infrastructure were apparently diverted into NATO territory by Russian electronic warfare. Drones entered all three Baltic states this week. Russia's disinformation campaign then attributed these incursions to Baltic complicity.
It's a textbook combination: electronic warfare creates a physical fact (drones in Baltic airspace), then information warfare reframes that fact as evidence of Baltic complicity. The Latvian Defence Ministry issued a formal rebuttal. ISW assessed Russia is attempting to divert attention from its inability to defend against Ukrainian drone strikes on its own infrastructure.
The campaign uses social media bots targeting Russian-speaking audiences and "involving young people," according to the Latvian ministry. It opens a new geographic front in Russian information operations — running concurrently with Hungary, France (where municipal elections were held March 23), and continued Ukraine war narratives.
The UK Parliament Confrontation
The UK Commons Science, Innovation and Technology Committee held a combative two-hour hearing this week. The target: X, TikTok, and Meta.
Conservative MP George Freeman confronted X's representative with a specific case: a deepfake showing Freeman defecting to Reform UK. It had circulated without action from the platform. The X representative, sitting in the room with the victim of the deepfake, had no satisfactory answer.
TikTok was found still hosting instructions for using Grok to "nudify" images of minors. X claimed the platform is "politically agnostic" — a position the committee found unconvincing given Musk's public endorsement of Reform UK.
The committee warned that UK May elections could be "seriously disrupted." If platforms can't address a deepfake when the victim is literally in the room asking about it, the enforcement gap isn't a technical problem. It's a structural one.
VOA Weaponised as State Propaganda
Voice of America journalists filed a First Amendment lawsuit this week, alleging the US government's own international broadcaster has been turned into a wartime propaganda outlet.
The complaint: USAGM Acting CEO Michael Rigas and Kari Lake censored interviews, suppressed footage of anti-government protests within Iran, banned coverage of a key Iranian dissident, and suppressed reporting on US-caused casualties including a girls' school bombing on February 28.
NPR confirmed VOA's Persian service has been heavily promoting the Trump administration line on the war. The journalists' complaint describes the content as "Trump in the style of Kim Jong-Il."
The entity designed to counter foreign state propaganda is now producing it. Iran fabricates victory narratives outward while suppressing internal dissent. The US suppresses unfavourable war reporting through its own international broadcaster. The symmetry is exact.
What the PGI Shows
The Albis Perception Gap Index hit 5.88 on March 28 — its first daily measurement. The geopolitics tributary scored 7.02, firmly in "Competing Realities" territory.
The most divergent region pair: Middle East and US, at 7.69 across 13 stories. The most aligned: Asia-Pacific and EU, at 4.93.
The top story by perception gap: Israel's strikes on Iranian nuclear sites, scoring 8.13. US and EU media framed them as a deterrence necessity. Middle Eastern media framed them as a sovereignty violation. Farsi media reported pre-evacuation details and NPT withdrawal debate that appeared in zero English-language coverage.
Al Jazeera files every story under "US-Israel war on Iran." CNN uses "Iran war." Two words, one preposition, two different wars. That framing gap hasn't narrowed. The PGI suggests it's widening.
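The report doesn't publish the PGI formula, but the structure above suggests one: per-story divergence scores for each region pair, averaged upward into pair scores and an overall index. A toy sketch under that assumption (every name, and every number except the reported 8.13 story score, is illustrative rather than actual Albis methodology):

```python
# Hypothetical roll-up of a perception-gap style index: average per-story
# divergence scores within each region pair, then average across pairs.
# This is NOT the published Albis methodology; all names and all numbers
# except the reported 8.13 story score are illustrative.

from statistics import mean

# Per-story divergence scores (0-10 scale) for each region pair.
story_scores = {
    ("Middle East", "US"): [8.13, 7.6, 7.4],   # e.g. nuclear-site strike coverage
    ("Asia-Pacific", "EU"): [5.1, 4.8],
}

def pair_average(scores):
    """Average divergence for one region pair across its stories."""
    return mean(scores)

def overall_index(story_scores):
    """Toy overall index: mean of all region-pair averages."""
    return mean(pair_average(s) for s in story_scores.values())

for pair, scores in story_scores.items():
    print(f"{pair[0]} vs {pair[1]}: {pair_average(scores):.2f}")
print(f"Toy index: {overall_index(story_scores):.2f}")
```

A real index would presumably weight pairs by story count and coverage volume; the sketch only shows the shape of the roll-up.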
What Changed This Week
Last week's report documented the emergence. This week's documents the institutionalisation.
EDMO gave the AI war its name. Meta's own board quantified the enforcement failure. Reuters normalised deepfakes as standard campaign tools. Russia's botnets graduated from amplification to fabrication. The UK Parliament held a hearing where a deepfake victim couldn't get action from the platform that hosted the fake of him.
The wire fraud conviction reported by The Disinformation Observer may matter most. A court found that AI-assisted bot fraud fits existing wire fraud statutes. That creates a legal template prosecutors can use the moment monetised influence operations touch revenue-sharing platforms. It's a lower evidentiary bar than espionage charges.
Deepfake legislation reached 31 states. The EU AI Act's Article 50 — mandatory AI content labelling with penalties up to 6% of global revenue — takes effect in August. China is compiling Taiwan election influence data, making Taiwan a third concurrent election-interference theatre alongside Hungary and the US midterms.
The information environment didn't break this week. It was already broken. This week, the institutions responsible for defending it started saying so out loud.
The State of Information is published weekly. Previous edition: March 15–22.
Sources for this article are being documented. Albis is building transparent source tracking for every story.