State of Information: War Fakes, Party Deepfakes, and the Week Truth Got Harder to Find
AI-generated war footage floods social media, a US party committee weaponizes deepfakes against an opposing Senate candidate, Russia runs covert TikTok ops in Hungary, and the EU races to label synthetic content before it's too late. Your weekly briefing on information warfare.

Fake explosions went viral this week. Real ones killed people.
The Iran war is now two weeks old. And the information war around it has become almost as chaotic as the conflict itself. AI-generated videos depicting fake missile strikes, fabricated troop movements, and invented casualties have racked up tens of millions of views across social media, according to CNN reporting on March 11.
Meanwhile, an official US party committee used a deepfake to attack a Democratic Senate candidate in its own country. YouTube expanded its AI detection tools to politicians. Russia launched a covert TikTok operation targeting Hungarian elections. And the EU scrambled to finalize rules for labeling synthetic content before it drowns everything else out.
This was the week information warfare went mainstream. Here's what happened.
The Iran War Fake Factory
The New York Times published an investigation on March 14 documenting what it called a "cascade of AI fakes" about the Iran conflict. The findings are stark.
Some fakes tried to deceive. Old footage from other conflicts was repackaged as current. Video game clips circulated as real combat footage. AI-generated images showed attacks that never happened.
But the Times found something else too. Dozens of AI-generated videos made no effort to hide their artificial origin. They functioned as a new form of digital propaganda -- bringing political arguments to life visually, arguments typically made by governments or their propaganda arms.
Both sides are doing it. The Associated Press reported on March 10 that state actors are behind much of the visual misinformation. Iran-linked networks on X, Instagram, and Bluesky have been seeding pro-Tehran content since the war began, according to reporting from MS Now. Operation Overload, a Russia-aligned campaign tracked by Albis since early 2026, has escalated its output, producing fabricated videos designed to impersonate intelligence agencies and news outlets.
CNN's March 12 analysis highlighted US propaganda too. Official "boom boom" videos from Defense Secretary Pete Hegseth and DHS Secretary Kristi Noem presented a sanitized version of the strikes while ignoring civilian casualties. The framing gap is enormous: Iran reports 1,348 civilian deaths (UN figures), while US coverage focuses on military targets.
The Albis Perception Gap Index captured this divergence. The Iran war story scored a PGI of 7.78 on March 15, with the Middle East-US pair hitting 8.8 -- meaning the same events are being described in almost entirely different ways depending on where you read about them.
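Albis doesn't publish the PGI formula, but the intuition behind a perception-gap score is easy to sketch. The hypothetical Python below treats each region's framing of a story as a vector of 0-10 scores and takes the index to be the mean gap across regional pairs; it's a toy model of the concept, not our production metric.

```python
# Hypothetical perception-gap score: a toy illustration only.
# Albis does not publish the actual PGI formula.
from itertools import combinations

def pairwise_gap(a: list[float], b: list[float]) -> float:
    """Mean absolute difference between two framing vectors (0-10 scales)."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def perception_gap_index(framings: dict[str, list[float]]) -> tuple[float, dict]:
    """Overall index = mean of all regional pair gaps; pair gaps returned too."""
    pairs = {
        f"{r1}-{r2}": pairwise_gap(v1, v2)
        for (r1, v1), (r2, v2) in combinations(framings.items(), 2)
    }
    return sum(pairs.values()) / len(pairs), pairs

# Toy framing dimensions (e.g. blame, threat, casualty emphasis), scored 0-10:
regions = {
    "US":          [2.0, 8.5, 1.5],
    "Middle East": [9.0, 3.0, 9.5],
    "EU":          [5.0, 6.0, 5.0],
}
index, pair_gaps = perception_gap_index(regions)
```

In a toy model like this, a Middle East-US pair score near 8.8 means the two framings disagree on almost every dimension of the story.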
Pakistan also entered the deepfake arena. A fabricated video of Indian External Affairs Minister Jaishankar circulated widely during the crisis, flagged by India's government on March 14. Our tracker links this to Iranian sockpuppet networks whose activity correlates with Iranian internet blackouts.
The NRSC Deepfake: When the Call Comes From Inside the House
On March 11, the National Republican Senatorial Committee posted an AI-generated deepfake of James Talarico, the Democratic nominee for US Senate in Texas.
The video shows a hyper-realistic version of Talarico narrating what the NRSC calls his "extreme statements" on immigration, Christianity, and transgender policy. A UC Berkeley professor specializing in digital forensics analyzed the ad and told CNN the deepfake would "likely deceive most viewers."
It carried a tiny "AI-generated" watermark. Public Citizen's Robert Weissman described it as "more an admission of wrongdoing than an effort at transparency."
This matters because it wasn't made by a troll farm. It wasn't a foreign operation. It was produced by the Republican Party's official Senate campaign arm and posted to X from a verified account.
Twenty-six states now have laws regulating political deepfakes, the New York Times reported on March 13. Most require disclosure or bar distribution close to elections. Pennsylvania went further this week, making malicious deepfakes a criminal offense -- a first-degree misdemeanor or third-degree felony.
Maine is considering its own crackdown, with a narrowly scoped bill targeting synthetic media in campaign communications. But at the federal level, nothing exists. No law prevents a Senate committee from deepfaking its opponent.
Russia's Hungarian TikTok Operation
The Financial Times reported this week that Russia's Social Design Agency -- a Kremlin-linked, US-sanctioned consultancy -- drew up plans to flood Hungarian social media with pro-Orban content ahead of the country's April 2026 elections.
The plan, detailed by the Foundation for Defense of Democracies on March 12, targets TikTok specifically. Content would praise Prime Minister Viktor Orban as a defender of Hungarian sovereignty while depicting his rival Peter Magyar as an EU puppet.
Hungarian opposition leader Magyar accused Fidesz and Russia of preparing the campaign, telling Ukrainska Pravda on March 10 that operatives are working from the Russian embassy in Budapest.
This isn't speculative. The Social Design Agency has a documented track record. It ran operations targeting French, German, and US audiences in 2023-2024. The US sanctioned the agency in 2024. Now it's back, with a new target and the same playbook.
YouTube's Expansion: Too Little, One Day Late
On March 10, YouTube announced it was expanding its AI-powered likeness detection tool to politicians, government officials, and journalists. The tool, first rolled out in October 2025 for creators, lets public figures flag unauthorized AI-generated depictions of themselves for removal.
One day later, the NRSC posted its Talarico deepfake.
YouTube's timing underscores the gap between platform policy and reality. The tool is opt-in. Politicians must enroll. Detection happens after publication, not before. And the system is US-focused -- politicians in Hungary, Pakistan, or Iran don't have access.
Axios reported the expansion covers "a select group" of officials. NBC News clarified that YouTube will "reach out" to eligible figures who can then "decide if they want to enroll." The passive framing tells you everything about the enforcement model.
The EU's Content Labeling Race
Europe is trying a different approach. Article 50 of the EU AI Act mandates labeling of AI-generated content starting August 2, 2026. Fines reach 6% of global revenue.
This week, the EU released its second draft Code of Practice for how that labeling should work. The system proposes two layers: secure metadata embedded in files, plus digital watermarking visible to users. Optional fingerprinting and verification would add further detection capacity.
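To make the metadata layer concrete, here's a minimal Python sketch that writes and reads a machine-readable provenance tag in a PNG's text chunks via Pillow. The key names are hypothetical -- the draft Code of Practice hasn't fixed a schema -- and a compliant implementation would more likely rely on a signed standard such as C2PA than on plain text chunks.

```python
# Minimal sketch of machine-readable AI-provenance metadata in a PNG.
# Key names ("ai_generated", "ai_generator") are illustrative only.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src: str, dst: str, generator: str) -> None:
    """Save a copy of the image carrying AI-provenance text chunks."""
    img = Image.open(src)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("ai_generator", generator)  # e.g. the model or tool name
    img.save(dst, pnginfo=meta)

def read_ai_label(path: str) -> dict:
    """Return any AI-provenance keys found in the PNG's text chunks."""
    return {k: v for k, v in Image.open(path).text.items() if k.startswith("ai_")}
```

Plain text chunks like these are stripped by a screenshot or re-encode, which is exactly why the draft pairs metadata with watermarking designed to survive transformation.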
Feedback runs until March 30. A final version is expected by June.
The UK House of Commons Library published its own analysis on March 11, noting the EU approach requires AI providers to mark outputs in a machine-readable way. But the UK has no equivalent legislation planned.
In the US, the approach is fragmented. The Commerce Department was due to complete a review of state AI laws by March 11, per a Trump executive order seeking to preempt what the administration considers "burdensome" state regulations. A Ropes & Gray analysis from March 11 questioned whether federal preemption of state AI content laws would survive legal challenge.
Meanwhile, the Transparency Coalition logged over a dozen new state-level AI content bills introduced in March alone.
The Bigger Picture: 90% Synthetic?
The European Parliament's research service estimated that up to 90% of online content could be synthetically generated by the end of 2026. That number appeared in our March 13 scan data.
If it's even half right, the implications are staggering. How do you run elections when most of what voters see online might be machine-made? How do you report news when the footage could be fabricated? How do you build trust when the default assumption becomes "probably fake"?
This week offered a preview. A deepfake from a Senate committee. AI war footage with millions of views. A Kremlin-linked agency planning TikTok floods. And platforms scrambling to build detection tools that arrive a day after the damage is done.
What We're Tracking
Active campaigns this week:
- Operation Overload (Russia-aligned): Escalating during the Iran war. Fabricated videos impersonating intelligence agencies and news outlets. High sophistication.
- Iranian sockpuppet networks: IRGC-linked accounts seeding pro-Tehran content across X, Instagram, and Bluesky. Activity patterns correlate with Iranian internet blackouts.
- Russia's Social Design Agency: Covert TikTok operation targeting Hungary's April elections. Operating from the Russian embassy in Budapest, per opposition claims.
- NRSC political deepfakes (US domestic): Official party committee producing hyper-realistic AI-generated attack ads (per forensic analysis) carrying only a tiny disclosure watermark.
- Metric Media "pink slime" network: Ongoing synthetic local news sites targeting 2026 midterm voters.
- Iran war AI content: A mix of state-sponsored and opportunistic fakes -- fabricated combat footage, recycled clips from older conflicts, and video game footage presented as real.
Platform and policy moves this week:
- YouTube expanded deepfake detection to politicians (March 10)
- X reported a wave of account bans for "inauthentic behaviors" (March 12)
- Meta partnered with the FBI and Thai police to disable 150,000+ scam accounts
- EU released second draft AI content labeling Code of Practice (March 12)
- Pennsylvania criminalized malicious deepfakes
- Maine considering narrowly scoped deepfake campaign bill
- EU AI Act Article 50 labeling rules finalized for August 2 enforcement
- US Commerce Department AI law review due (preemption of state laws)
- 26 US states now have political deepfake laws
The Week Ahead
Hungary's elections approach in April. Expect the Social Design Agency operation to intensify. The NRSC deepfake has set a precedent -- watch for both parties to deploy synthetic media in midterm races. Iran war disinformation will continue scaling as the conflict enters its third week.
The EU's feedback period on AI content labeling closes March 30. That code will shape how synthetic content is marked across every major platform operating in Europe.
And somewhere, right now, someone is generating the next fake video that millions of people will believe is real.
The question isn't whether it's happening. It's whether anyone can keep up.
The State of Information is published weekly by Albis. We track information warfare campaigns, synthetic media, and perception gaps worldwide. Sources cited include CNN, the New York Times, Associated Press, Financial Times, Axios, TechCrunch, NBC News, Common Dreams, Washington Examiner, Ukrainska Pravda, Foundation for Defense of Democracies, EUvsDisinfo, House of Commons Library, and the Transparency Coalition.