Meta Found Liable for Harming Children Twice in 48 Hours
Two juries in two states ruled Meta designed products that hurt kids. $381M in damages so far, 2,000 lawsuits waiting, and insurers won't cover it.

A jury in Santa Fe found Meta liable for enabling child exploitation on Tuesday. A jury in Los Angeles found Meta liable for designing addictive products that harmed a young woman on Wednesday. Two different courts, two different legal theories, the same conclusion: Meta built platforms that hurt children and knew it was happening.
The combined damages so far total $381 million. That number is about to get much larger.
Two Verdicts, Two Theories, One Pattern
The New Mexico case was about predators. State Attorney General Raúl Torrez argued that Meta turned Instagram into what he called a "breeding ground" for child sexual exploitation. The jury agreed, finding Meta willfully violated New Mexico's consumer protection laws. They imposed the maximum penalty of $5,000 per violation, totalling $375 million, which works out to 75,000 separate violations. They deliberated for less than a day.
The Los Angeles case was about design. A 20-year-old woman known as KGM testified that she became hooked on YouTube at age six and Instagram at nine. By ten, she was depressed and self-harming. Her lawyers argued that features like infinite scroll, autoplay, and algorithmic recommendation weren't bugs — they were the product working exactly as intended. "How do you make a child never put down the phone?" her lawyer Mark Lanier asked the jury. "That's called the engineering of addiction."
The LA jury took nine days to deliberate. When they came back, the vote was 10-2 on every single question — all in KGM's favour. They awarded $6 million total, with Meta paying 70% and YouTube the rest.
Both companies say they'll appeal. But the legal barrier that protected them for nearly three decades just cracked in two places at once.
The Insurance Problem Nobody's Talking About
Here's the detail that should worry Meta's shareholders more than either verdict: a Delaware court recently ruled that Meta's insurers are not responsible for damages from lawsuits alleging their platforms harm children.
That ruling means every dollar from every future verdict comes directly from Meta's balance sheet. For a company facing roughly 2,000 pending lawsuits — from families, individuals, and over 250 school districts across the United States — the financial exposure is staggering.
The parallel to Big Tobacco isn't just rhetorical anymore. When tobacco companies lost their first major lawsuits in the 1990s, the initial payouts looked manageable. The industry eventually paid over $200 billion in settlements. Meta's current verdicts are the first drops. The flood is the 2,000 cases behind them, each one now armed with a jury precedent.
Section 230's Shrinking Shield
For decades, social media companies have hidden behind Section 230 of the Communications Decency Act — a 1996 law that shields platforms from liability for content posted by users. The law was written when the internet looked like message boards and email chains. It was never designed for algorithmic amplification engines that decide what billions of people see every hour.
These lawsuits sidestep Section 230 entirely. They don't argue that Meta is responsible for what users post. They argue Meta is responsible for how it designed the machine that serves that content. Infinite scroll. Autoplay. Algorithmic recommendations that learn what a child responds to and feed them more of it, faster, with no off switch.
"All media tries to keep people on and coming back," argued Erwin Chemerinsky, dean of UC Berkeley's School of Law, suggesting the verdict went too far. Others counter that social media's algorithmic ability to capture, cultivate, and control attention makes it fundamentally different from books, films, or video games. A Marvel movie ends after two hours. Instagram's feed never does.
The UK Is Watching
The timing is striking. On the same day the LA jury returned its verdict, the UK government announced it would pilot social media bans, digital curfews, and time limits on 300 teenagers across all four nations.
The six-week pilot will test three approaches: complete social media removal for some teens, overnight access blocks for others, and one-hour daily caps on Instagram, TikTok, and Snapchat for a third group. A control group with no restrictions will provide the comparison. Nearly 30,000 parents and children have already responded to the government's digital wellbeing consultation.
Separately, the Wellcome Trust is funding what will be the world's first major scientific trial of reducing social media use among adolescents — a study of 4,000 students aged 12 to 15 in Bradford, led by Cambridge psychologist Amy Orben.
This matters because Australia's social media ban for under-16s, which launched in December, hasn't worked. Ninety percent of teenagers reported never losing access. Kids drew fake moustaches to fool facial recognition. A pet dog passed the age verification check. The UK appears to be learning from Australia's failure by testing what actually works before legislating.
The Attention Economy's Reckoning
What's happening this week isn't just legal news. It's the attention economy hitting a wall.
For fifteen years, the business model has been simple: capture attention, sell it to advertisers, optimise for engagement above all else. Internal documents from Meta — many revealed through whistleblower Frances Haugen's disclosures in 2021 — showed the company knew Instagram was harmful to teenage girls and chose growth over safety. Those documents are now evidence in courtrooms across America.
The $6 million verdict in Los Angeles is small by Big Tech standards. Meta's quarterly revenue exceeds $40 billion. But bellwether verdicts aren't about the dollar amount. They're about the precedent. KGM's case was selected from thousands precisely because its outcome would signal how juries respond to the core argument: that these products were designed to addict.
The jury responded clearly. Ten of twelve jurors agreed on every count.
What Happens Next
The first federal trial is set for June in San Francisco. Hundreds more cases are consolidated in the federal system. State attorneys general from across the country are watching. If the pattern holds — if juries keep finding Meta liable on both exploitation and addiction theories — the company faces a legal landscape that makes the $381 million in current verdicts look like a down payment.
Meanwhile, the UK's pilot will run through spring, France plans to implement its under-15 social media ban by September, and MPs in Westminster are now investigating the neuroscience of how screens affect developing brains — specifically how dopamine release during device use shapes brain architecture in adolescents.
The companies have spent years insisting they take child safety seriously. Two juries just said that isn't true. Two thousand more families are waiting to make the same case.
The question is no longer whether social media harms children. Juries have answered that. The question now is what it costs.