AI Now Powers 27% of Foreign Disinformation Campaigns, EU Report Finds
The EU's latest threat assessment reveals AI-generated content appeared in 27% of foreign disinformation incidents in 2025. Here's how AI is reshaping information warfare globally.

More than one in four foreign disinformation incidents recorded by the EU last year involved AI-generated content. That's the headline finding from the European External Action Service's latest threat assessment, which tracked 540 cases of foreign information manipulation across roughly 10,500 social media channels and websites in 2025.
The number — 27% — marks a clear shift. AI-generated text, synthetic audio, and manipulated video aren't experimental tools anymore. They're standard equipment in information warfare.
What the EU Found
The EEAS report breaks down the problem in blunt terms. Of the 540 recorded incidents, 29% were attributed to Russia and 6% to China. The remaining 65% couldn't be linked to a specific actor.
"Russian and Chinese actors have fully implemented AI tools to speed up content production and increase meddling activities with fewer resources," the report stated.
That last phrase matters. AI doesn't just make disinformation more convincing. It makes it cheaper. A campaign that once required a room full of paid trolls can now run on a fraction of the budget.
Ukraine remains the primary target. The campaigns aim to weaken international support for Kyiv and undermine trust in its leadership. But the reach extends far beyond one country.
Elections as Prime Targets
Nearly half of all recorded incidents were tied to elections, protests, or international crises. In 2025, the EEAS tracked election-related campaigns targeting Germany, Poland, Romania, Moldova, and the Czech Republic.
The pattern is consistent. During moments of public uncertainty — elections, protests, wars — disinformation operations surge. AI tools amplify the speed and volume.
This isn't a theoretical risk. It's a documented one, with specific countries named and specific campaigns tracked across the European political landscape.
The Telegram Campaign in Estonia
The EU-wide data connects to specific, local operations. On March 21, Estonian Foreign Minister Margus Tsahkna called out a propaganda campaign on Telegram pushing for autonomy for Narva, an Estonian city with a Russian-speaking majority.
"Narva is Estonia. Full stop. We see through attempts to divide us," Tsahkna wrote on X.
The campaign, reportedly orchestrated from Moscow, promoted separatist status for Narva and its surrounding region. It follows a familiar playbook: identify ethnic or linguistic divisions, amplify grievances through social media, and push narratives that fracture national unity.
Estonia's been here before. Baltic states have faced Russian-language information operations for years. But the Telegram campaign shows how these efforts continue to evolve — platform by platform, city by city.
Iran's War on Starlink
While some states produce disinformation, others fight to control the flow of information entirely. Iran's approach is simpler: seize the hardware.
As SpaceX's Starlink network passed 10,000 active satellites this week, Iran's intelligence services have been confiscating hundreds of Starlink terminals across the country. With conventional internet disrupted by the ongoing conflict, Starlink offered Iranians a way to bypass government censorship.
The government declared Starlink use illegal. Earlier this year, anti-government protesters used the satellite service to broadcast their activities abroad during internet blackouts. Estimates once put Iranian Starlink subscribers at 40,000 to 50,000. Most terminals are now believed to be inactive.
It's a stark illustration of the information warfare spectrum. On one end, sophisticated AI campaigns craft synthetic content to shape opinion. On the other, governments physically destroy communication infrastructure to prevent their citizens from seeing outside information.
The US: Policy Without Consensus
Meanwhile, the White House released its framework for national AI policy on March 20, proposing broad preemption of state AI laws. The framework asks Congress to require AI platforms to implement safeguards against child exploitation and self-harm, while also calling for streamlined data centre permitting and regulatory sandboxes.
What the framework doesn't address in detail is AI's role in disinformation. The focus is primarily economic — keeping the US competitive in AI development. Child safety gets a mention. Information integrity doesn't feature prominently.
The same week, Infowars announced it would shut down in mid-April, and The Daily Stormer said it was closing too. Two of the earliest large-scale online disinformation platforms are going dark. But as analysts have noted, the infrastructure they built — the audience habits, the conspiratorial frameworks, the business model — has already been absorbed into mainstream platforms.
What This Means
The EEAS numbers tell a straightforward story. AI has lowered the barrier to entry for information operations. States that once needed large budgets and dedicated teams can now run campaigns with a handful of operators and a set of generative tools.
At the same time, the targets are getting more specific. Not just broad national audiences, but individual cities like Narva. Not just social media feeds, but physical hardware like Starlink terminals.
The tools are different. The goals haven't changed. Divide populations, undermine trust in institutions, and control what people can see and say. The 27% figure from the EEAS isn't an anomaly. It's a baseline — and it's going up.
Sources & Verification
Based on 5 sources from 3 regions
- UNITED24 Media (Europe)
- ANSA (Europe)
- Seoul Economic Daily (Asia-Pacific)
- Mississippi Today (North America)
- Roll Call (North America)