TikTok Mental Health Misinformation 2026: 52% of ADHD Content Is Wrong, Study Finds
A University of East Anglia study found 52% of TikTok ADHD videos are inaccurate — while whistleblowers reveal the platform knew outrage drives engagement.

More than half of TikTok's ADHD videos are medically inaccurate. That's not an activist's estimate or a concerned parent's guess — it's the finding of a systematic review from the University of East Anglia, published this week in The Journal of Social Media Research, covering 5,057 social media posts across five platforms.
The number is 52%. For autism content, it's 41%. And the gap between TikTok and every other platform isn't even close.
The Numbers That Should Alarm You
The UEA team, led by Dr Eleanor Chatburn and Dr Alice Carter, examined 27 studies assessing mental health information on YouTube, TikTok, Facebook, Instagram, and X. The conditions covered ranged from ADHD and autism to schizophrenia, bipolar disorder, depression, eating disorders, OCD, and phobias.
Misinformation rates varied wildly — from 0% for anxiety and depression videos on YouTube Kids to 56.9% for certain mental health topics on YouTube's main platform. But one pattern held constant: TikTok was worse. Across every mental health category, TikTok's misinformation prevalence exceeded every other platform.
YouTube averaged 22% misinformation. Facebook averaged just under 15%. TikTok averaged above 50% for its two most-searched mental health topics.
The most striking finding wasn't the overall number. It was the professional gap. On TikTok, just 3% of ADHD videos posted by healthcare professionals contained misinformation. For non-professionals, that number was 55%. The platform isn't short of accurate content. The algorithm simply doesn't surface it.
The Algorithm Isn't Broken. It's Working Exactly as Designed.
This is where the UEA study collides with a separate, far more damning revelation. One week before the study was published, the BBC aired "Inside the Rage Machine," a documentary built on testimony from more than a dozen whistleblowers at Meta and TikTok.
Their core allegation: both companies deliberately weakened content moderation to chase engagement.
At TikTok, former employees described internal moderation dashboards where a political figure being mocked received higher priority than a 16-year-old in Iraq reporting sexualised images of herself. That wasn't a bug, they said. It was policy.
At Meta, competitive panic over TikTok's rise drove decisions that directly eroded safety. The company assigned 700 staff to grow Reels while refusing a request for just two specialist child-protection roles. Internal research found that Reels comments had a 75% higher prevalence of bullying and 19% more hate speech compared with the main Instagram feed. Meta had the data. It kept pushing.
The connection to mental health misinformation is direct. TikTok's algorithm measures watch time, not accuracy. A 60-second video claiming that forgetting your keys means you have ADHD will hold attention longer than a clinician carefully explaining diagnostic criteria. The algorithm doesn't know the difference. It doesn't need to. Engagement is engagement.
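The dynamic is easy to illustrate with a toy example. This is not TikTok's actual ranking system, whose internals are proprietary; it is a minimal sketch of any recommender whose objective is predicted watch time alone. Given two videos, the confident wrong claim outranks the careful accurate one whenever it holds attention longer, because accuracy never enters the objective.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    accurate: bool             # visible to fact-checkers, invisible to the ranker
    avg_watch_seconds: float   # the only signal the engagement model optimises

def rank_feed(videos: list[Video]) -> list[Video]:
    # A pure engagement ranker: sort by watch time, nothing else.
    # Note that `accurate` appears nowhere in the sort key.
    return sorted(videos, key=lambda v: v.avg_watch_seconds, reverse=True)

feed = rank_feed([
    Video("Clinician walks through ADHD diagnostic criteria",
          accurate=True, avg_watch_seconds=18.0),
    Video("Forgetting your keys? You probably have ADHD",
          accurate=False, avg_watch_seconds=47.0),
])
print(feed[0].title)  # the misleading video surfaces first
```

The watch-time figures here are invented for illustration; the point is structural. Any fix that does not change the objective function, only the content library, leaves this ordering intact.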
Dr Carter put it plainly: "TikTok's algorithms are designed to push rapidly engaging content and this is a major driver of misinformation. Once users show interest in a topic, they are bombarded with similar posts — creating powerful echo chambers that can reinforce false or exaggerated claims. It is a perfect storm for misinformation to go viral faster than facts can catch up."
The Real-World Damage
This isn't abstract. Clinicians across the UK, US, and Australia have reported a surge in patients arriving with self-diagnoses drawn from TikTok. A separate 2025 study in JMIR Infodemiology found that a high percentage of TikTok users who watched ADHD content self-identified with the symptoms shown — symptoms that were frequently inaccurate or taken out of clinical context.
The harm runs in both directions. People who don't have ADHD are seeking medication they don't need, clogging already-strained diagnostic services. People who do have ADHD or autism are getting their understanding of their own condition from videos that are wrong more often than they're right.
Dr Chatburn warned that misinformation "can also make mental illness seem scary or hopeless, which creates even more fear and misunderstanding. When people come across misleading advice about treatments, especially ones that aren't backed by evidence, it can delay them from getting proper care."
The National Autistic Society's Judith Brown called it a demonstration of "how rapidly misinformation can spread on social media."
TikTok called the study "flawed" and said it relies on "outdated research about multiple platforms."
The Regulatory Vacuum
Here's the part that ties it together. On March 20 — the same day the UEA study made headlines — the White House released its National AI Legislative Framework. The document talks about protecting children and fostering innovation. Its core mechanism: blocking states from regulating AI and technology companies.
At least 1,561 AI-related bills have been introduced across US states. The White House framework would pre-empt them. It explicitly opposes "open-ended liability" for AI firms and pushes for what legal analysts describe as a "light-touch regulatory approach."
In Europe, the European Commission has opened proceedings against both Meta and TikTok for breaching Digital Services Act transparency rules. In the US, the federal government is moving in the opposite direction — asking Congress to ensure that no state can impose its own rules on how these platforms operate.
The timing is almost too neat. In the same week, we learned that TikTok's mental health content is wrong more often than it's right, that the company's own employees say it deliberately weakened safety for engagement, and that the US government wants to prevent states from doing anything about it.
What Actually Works
YouTube Kids was the only platform in the UEA study to score 0% misinformation on some mental health topics. The researchers attributed this to "stricter content moderation and prioritisation of child-friendly content." In other words, when platforms choose to moderate, they can. They just don't choose to, because moderation costs engagement.
The study's conclusion is a call for clinicians to become content creators, to compete for attention in the same spaces where misinformation thrives. It's a reasonable suggestion. It's also an admission that we've built information systems where doctors have to outperform influencers to get accurate health information in front of young people.
One in seven teenagers globally experiences a mental health disorder, according to the WHO. Social media is increasingly where they go to understand what's happening to them. And the platforms they trust most are the ones that get it wrong most often.
The question isn't whether TikTok's algorithm amplifies misinformation. The UEA data settled that. The question is whether anyone with the power to change it has the incentive to try.
Right now, the whistleblowers say no. The regulators say maybe, if you're in Europe. And the White House says states shouldn't even be allowed to ask.