The EU Makes Deepfakes Illegal in August. The US Midterms Are in November.
The EU's Article 50 deepfake labelling law takes effect August 2, 2026, three months before US midterms that fall under no federal deepfake law. The same AI tools, two opposite rules.

On August 2, 2026, the European Union's deepfake labelling law kicks in. Every AI-generated video, image, or audio clip must carry a machine-readable tag marking it as synthetic. Companies that skip that label face fines of up to €15 million, or 3% of their global revenue, whichever is larger. Ninety-three days later, Americans vote in the midterm elections.
That gap isn't a coincidence. It's the story.
Two democracies, one technology, opposite rules
The EU's Article 50 comes from the AI Act passed in 2024. The transparency obligations were always scheduled for August 2026; the law gave companies two years to build compliance systems. Deployers must disclose when content has been artificially generated or manipulated. That applies to everyone serving EU markets: Meta, Google, TikTok, OpenAI, the lot.
The US has no equivalent. Federal law contains no comprehensive statute specifically regulating AI-generated political deepfakes. What it has instead is a patchwork of older laws — fraud, election interference, identity theft, defamation — all written before generative AI existed.
Twenty-six states have passed some form of deepfake election law. Most require disclosure or ban deceptive synthetic media within a set window before election day. But they contradict each other. Some cover only video. Some exempt satire. Some only apply in the 60 days before an election. None are federal. None cover the primaries.
The other 24 states have nothing.
The war tested this first
Before the midterms, the Iran conflict ran the experiment.
Iran's Islamic Revolutionary Guards Corps deployed deepfakes within hours of its opening strikes on February 28, including videos falsely claiming to have killed Israeli PM Benjamin Netanyahu. Seconds later, a follow-up message purporting to come from Israeli authorities directed people to a malicious app. The combination of a real strike, a disinformation push, and a cyberattack was coordinated in a way analysts hadn't seen at scale before.
Pakistan joined in. When regional tensions escalated, Pakistan-linked accounts circulated AI-generated videos of India's Chief of Naval Staff, Admiral Dinesh K. Tripathi, and External Affairs Minister S. Jaishankar — both flagged as deepfakes by India's PIB Fact Check unit. India's Army Chief General Upendra Dwivedi got the same treatment.
What defence analysts are calling the "cognitive warfare laboratory" isn't metaphorical. Analysts at the Foreign Affairs Forum assessed that AI-generated content now makes up a larger share of the Iran conflict's disinformation ecosystem than content produced through traditional manipulation methods. That's a qualitative threshold crossed, not just more of the same.
The same companies, the same tools, the same underlying models used in that laboratory will be serving US political ads in October.
The first confirmed midterm deepfake
It's already happening.
On March 11, 2026, the National Republican Senatorial Committee posted an attack ad featuring a deepfake of James Talarico, the Democratic nominee for the US Senate in Texas. Talarico called it illegal. His legal team is checking which state laws might apply. The NRSC has not taken it down.
Texas is one of the states with a deepfake disclosure law; it requires labelling. Whether that law applies to this ad, which turns on timing and on whether the content meets the statutory definition, is being argued by lawyers rather than decided by regulators.
That's the gap in practice: even where state laws exist, enforcement means lawsuits. The FEC hasn't issued rules on AI-generated campaign content. Congress hasn't passed a bill. The primary runoff is in late May.
The EU law starts August 2. The general election is November 3.
What Article 50 actually requires
The EU rule is specific. It covers AI systems generating synthetic audio, image, video, or text. Providers must ensure outputs are marked in a machine-readable format, and the marking must be effective, interoperable, robust, and reliable. Deployers, the companies putting AI content out, must disclose when content has been artificially generated or manipulated, unless the AI use is obvious from context, the work is clearly artistic, or the use is authorised for law enforcement.
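What such a machine-readable mark looks like in practice is still open: the Act names no single technical standard, though C2PA-style content credentials are a leading candidate. Here's a minimal sketch of the idea, assuming a simple JSON manifest; every field name below is an illustrative assumption, not any official schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_synthetic_tag(media_bytes: bytes, generator: str) -> str:
    """Build a hypothetical machine-readable disclosure for a synthetic file."""
    manifest = {
        "assertion": "ai_generated",             # the disclosure itself
        "generator": generator,                  # which model produced the content
        "created_utc": datetime.now(timezone.utc).isoformat(),
        # Hashing the media binds the tag to this exact file, so the tag
        # can't simply be copied onto unrelated content.
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "legal_basis": "EU AI Act, Article 50",
    }
    return json.dumps(manifest, indent=2)

# Stand-in bytes for a generated video clip.
print(make_synthetic_tag(b"...video bytes...", generator="example-model-v1"))
```

The "robust" requirement is the hard part: a metadata manifest like this survives ordinary distribution but not a screen recording or a re-encode that strips metadata, which is why providers typically pair manifests with watermarks embedded in the pixels or audio themselves.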
There's an exception for satire. That exception is already being discussed as a loophole: campaigns could argue attack ads are "satirical" commentary on a candidate. EU regulators are aware of this. The Commission's Code of Practice, published in January 2026, doesn't resolve it cleanly.
Those figures sit in the AI Act's middle penalty tier. Article 50 is a transparency obligation, not a prohibited practice, so violations run up to €15 million or 3% of global revenue; the Act's top tier of €35 million or 7% is reserved for the most serious violations, the banned practices under Article 5.
Three percent of global revenue still puts the ceiling for a Meta or Google noncompliance fine in the billions of dollars. In the US, the equivalent fine is zero. Federal law doesn't cover it.
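A back-of-the-envelope calculation shows why. The revenue figure below is an assumption for illustration, roughly the order of magnitude the largest platforms report annually.

```python
# The AI Act's "whichever is higher" rule means the flat cap matters only
# for small firms; at platform scale, the 3%-of-revenue branch dominates.
FLAT_CAP_EUR = 15_000_000        # Article 50 tier: €15 million...
RATE = 0.03                      # ...or 3% of global revenue

revenue_eur = 150e9              # assumed platform-scale annual revenue
ceiling = max(FLAT_CAP_EUR, RATE * revenue_eur)
print(f"Maximum fine: €{ceiling / 1e9:.1f} billion")  # €4.5 billion
```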
The structural divergence
What's new here isn't that deepfakes are dangerous in elections. That's been true since 2019. What's new is the collision of timing.
The EU enforcement deadline lands three months before the US general election. The same AI tools — built by the same American companies — will face binding legal obligations on one side of the Atlantic and effectively none on the other. A political AI company operating in Berlin must label its content. The same company's US division can put the same video into a Texas attack ad with no federal requirement to disclose anything.
This creates a competitive asymmetry. EU-compliant AI tools are more expensive to deploy — labelling infrastructure costs money. US campaigns using the same tools face no equivalent cost. That's not just a policy gap. It's a structural advantage for synthetic political content in the one election the EU law doesn't cover.
Meanwhile, the Iran war has already shown what unlabelled AI content does to an information environment. Ninety million Iranians have been offline since February 28. The deepfake of Netanyahu's death circulated for hours before debunking caught up. Pakistan's deepfaked Indian military leaders reached audiences in multiple countries before fact-checkers flagged them.
The 2026 midterms won't be a combat zone. But the same tools, at higher volume, aimed at 240 million eligible American voters, with no federal labelling rule — that's not a hypothetical scenario. It's what the calendar already shows.
The EU built a warning label system. The US will run the November election to find out if it needed one.
For more on how information shapes perception across borders, see the Albis Perception Gap Index and our AI and information warfare coverage.
Sources & Verification
Based on 5 sources from 4 regions
- European Commission (Europe)
- CNN Politics (North America)
- Foreign Affairs Forum (International)
- Columbia Law Review, EU AI Act (International)
- India Today (Asia-Pacific)