AI Fake People Now Get More Followers Than Real Ones
A fabricated soldier hit 1M Instagram followers in 3 months. In a recent audit, Instagram labelled just 14% of AI content. Here's why fake people are winning the attention war.

A blonde woman in military fatigues accumulated over one million Instagram followers in three months. She posed on bunk beds, walked tarmacs in high heels beside Donald Trump, and drove thousands of fans to her OnlyFans page for foot photos. She didn't exist. "Jessica Foster" was entirely AI-generated — every photo, every pose, every pixel — and nobody behind the screen ever replied to a single comment.
Instagram removed the account in late March 2026. But the damage — or the lesson, depending on how you look at it — was already locked in. A fake person, built in minutes, outperformed the vast majority of real humans competing for attention on the same platform.
And the systems designed to catch her? They barely tried.
The Fabricated Person Economy
Foster wasn't a one-off glitch. She's the most visible example of a growing category of AI synthetic personas being deployed across social media — fabricated humans designed to build audiences, push political messages, and generate revenue.
The same week Foster's account was finally removed, the BBC documented a wave of AI-generated videos featuring fake female Iranian soldiers saying "Habibi, come to Iran" — propaganda clips designed to humanise one side of the ongoing war. One giveaway: Iran prohibits women from serving in combat roles. But the videos circulated widely before anyone checked.
On TikTok, an AI-generated female police officer with over 26,000 followers posts pro-Trump content. A video featuring her smiling alongside text about mass deportations drew hundreds of comments from users treating the interaction as real.
These aren't crude experiments. They're a business model. "A lot of the AI generation is basically to get clicks and money or to drive people to a more lucrative place," said Sam Gregory, executive director of Witness, an organisation combating deceptive AI.
Foster's operation was elegant in its cynicism: use patriotic military imagery and MAGA aesthetics to build a following, then monetise that audience through an OnlyFans foot-fetish page. Politics as a funnel for pornography. The creator behind the screen never had to show their face, answer a question, or exist.
The Platforms Promised to Fix This
Five major platforms — Instagram, LinkedIn, Pinterest, TikTok, and YouTube — have committed to labelling AI-generated content. The technology exists. The Coalition for Content Provenance and Authenticity (C2PA) has developed cryptographic metadata standards that can track whether an image was AI-generated from creation to distribution.
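The core idea behind C2PA is cryptographic binding: a signed manifest commits to the exact bytes of an image, so a platform can check both who claims to have produced it and whether the file has been altered since. The sketch below is a deliberately simplified illustration of that binding, not the real C2PA format (which uses JUMBF containers, X.509 certificate chains, and a detailed assertion schema); the key, generator name, and function names are all invented for the example.

```python
import hashlib
import hmac
import json

# Stand-in for a generator's private signing key. Real C2PA uses
# public-key certificates, not a shared secret like this.
SIGNING_KEY = b"demo-key"

def attach_manifest(image_bytes: bytes, generator: str) -> dict:
    """Create a provenance manifest committing to the image's content hash."""
    manifest = {
        "generator": generator,  # e.g. the AI tool that produced the image
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Platform-side check: is the signature valid, and do the bytes match?"""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    signature_ok = hmac.compare_digest(expected, manifest["signature"])
    hash_ok = claimed["content_hash"] == hashlib.sha256(image_bytes).hexdigest()
    return signature_ok and hash_ok

image = b"\x89PNG...fake image bytes"
m = attach_manifest(image, generator="example-image-model")
print(verify_manifest(image, m))              # True: untouched image
print(verify_manifest(image + b"tamper", m))  # False: bytes were changed
```

The fragility the audit found follows directly from this design: the manifest travels with the file, so if an upload pipeline re-encodes the image or strips its metadata, the binding is lost and the platform has nothing to check.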
So how well is it working?
A March 2026 audit by The Indicator tested the system. The investigator created over 200 AI-generated images and videos using Google, Meta, and OpenAI tools, then posted them across all five platforms.
Instagram — the platform where Foster built her million-follower army — labelled just 14% of the AI content posted. TikTok managed about a third. YouTube caught roughly half. The best performers, LinkedIn and Pinterest, still missed a third of synthetic content.
"The results of this study are confirmation that voluntary commitments by the tech companies cannot be taken seriously," said David Evan Harris, who helped write California's AI Transparency Act.
The failure isn't random. Whether an image gets labelled depends on which tool created it, what device uploaded it, and which platform received it. The investigator described the process as feeling "like a slot machine."
Meanwhile, Meta — a C2PA steering committee member — doesn't consistently add Content Credentials metadata to images generated by its own AI tools. Its oversight board has warned that the company is "inconsistently implementing" its own standards.
Campaign Season Without Guardrails
The synthetic persona problem is colliding directly with the 2026 US midterm elections. At least 15 campaign ads featuring AI-generated content have run since November, according to NBC News, spanning school board races to governor's campaigns.
The most high-profile case: the National Republican Senatorial Committee released an AI-generated video of Texas Senate candidate James Talarico reading real tweets about race and transgender rights. The deepfake looks uncannily like Talarico. The words "AI GENERATED" appear in small text in the bottom-right corner for roughly three seconds.
In Massachusetts, a Republican gubernatorial candidate created an AI radio ad using Democratic Governor Maura Healey's voice — making her "say" things she never said. The campaign's stated policy: disclose AI use only if the depiction isn't "obvious to a reasonable viewer."
And in New York, Andrew Cuomo's mayoral campaign used AI to depict criminals supporting his opponent.
There is no federal law regulating AI in political messaging. Just 26 states have any legislation addressing political deepfakes, and most of those laws remain untested.
Purdue University's Governance and Responsible AI Lab has been tracking the explosion. Since January 2025, the lab catalogued more than 1,000 English-language social media posts featuring fake images or videos of political figures. In the previous eight years combined, they'd recorded 1,344 total.
"We are blending the lines between political cartoons and reality," said Daniel Schiff, the lab's co-director. "A lot of people feel like these images or videos, or the stories they convey, feel true."
The Uncomfortable Finding: Knowing It's Fake Doesn't Help
Here's where it gets unsettling. Researchers at Purdue and the Brookings Institution have found that political deepfakes remain persuasive even when viewers know they aren't real.
Foster walked tarmacs in high heels, wearing an incorrect military badge, beside world leaders. None of it made sense under scrutiny. But scrutiny wasn't the point.
"People aren't necessarily looking for things that are real," Gregory said. "They are looking for things that represent their beliefs."
Valerie Wirtschafter, a Brookings fellow studying AI and emerging technology, describes deepfakes as "just another layer added on in terms of this process of reinforcing, rather than revisiting, what people believe is true."
This is the attention economy's dark punchline. Authenticity was supposed to be social media's currency. But fabricated people, optimised for engagement with no human constraints — no bad hair days, no off-brand opinions, no need for sleep — are better at playing the algorithm than real ones.
What Comes Next: The Swarm
Researchers worry that the Foster model is a prototype for something worse. A recent study published in Science warned about "AI swarms" — networks of synthetic personas capable of "coordinating autonomously, infiltrating communities, and fabricating consensus efficiently."
"It's sort of like a troll farm without actually having to have people any more," Wirtschafter said.
The old troll farms required human operators working in shifts. The new version needs someone to press a button. One creator could deploy hundreds of Jessica Fosters across dozens of platforms, each tailored to a different demographic, each funnelling attention toward a different goal — political, commercial, or both simultaneously.
The technology to track AI-generated content exists. The C2PA standard works. Several platforms have adopted it. But as the Indicator audit demonstrates, adoption and enforcement are different things entirely. Instagram labels 14% of AI content while hosting million-follower fake accounts for months. The infrastructure exists to solve this problem. The will doesn't.
This week, the Albis Perception Gap Index scored the Israeli strike on a marked press car that killed three journalists at 6.98 — near the maximum divergence score — with the US-Middle East perception gap reaching 8.5 out of 10. When the information space is already this fractured, adding millions of synthetic personas into the mix isn't just a content moderation challenge. It's a structural threat to shared reality.
We've entered a period where fake people can build larger audiences faster than real ones, where political campaigns deploy fabricated opponents with near-impunity, and where the platforms tasked with flagging synthetic content miss anywhere from a third to 86% of what passes through.
The question isn't whether AI-generated personas will shape the 2026 midterms. They already are. The question is whether anyone will be able to tell.