Gen Z Is the First Generation Less Cognitively Capable Than Their Parents. EdTech Did That.
US schools spent $30B on edtech. Gen Z's test scores dropped. Now AI tutoring is flooding classrooms — with less than 10% of tools having evidence they work.

Gen Z can't outperform their parents on standardized tests. First time that's happened in modern history. The technology that was supposed to fix education may have caused the damage. Now AI tutoring is flooding the same broken system — and the question isn't whether it works. It's whether anyone's bothering to check.
The $30 Billion Experiment Nobody Measured
Neuroscientist Jared Cooney Horvath noticed something strange in Utah's test data. For years, fourth- and eighth-grade reading and math scores had been climbing. Then around 2014, they started falling. The inflection point lined up almost perfectly with when the state required every school to have digital infrastructure for computer-adaptive testing.
"Before 2014, computers were in schools, they were just peripheral," Horvath told Fortune. "After 2014, every school had to have digital infrastructure."
Utah wasn't a fluke. Horvath, who testified before the US Senate earlier this year, found the same pattern globally. Data from the Programme for International Student Assessment (PISA) — which tests 15-year-olds worldwide — showed a clear correlation: more time on classroom computers, worse scores. Not just a dip. A reversal.
The US poured roughly $30 billion into edtech. Grade schoolers now spend an average of 98 minutes per day on school-issued devices and use 42 different edtech tools per year. The industry is worth hundreds of billions globally. And less than 10% of those tools have verified proof that they actually improve learning, according to Instructure's 2026 evidence report.
"This isn't a debate about rejecting technology," Horvath told the Senate. "It's a question of aligning educational tools with how human learning actually works."
The Reading Crisis That Won't Budge
The damage is showing up where it matters most: kids can't read.
A third of American fourth graders can't read above the basic level, according to the National Assessment of Educational Progress. Senator Andy Kim introduced a bill this week specifically to combat the crisis, calling reading "the foundation of life-long success."
Worse: a new NWEA report found that first and second graders — kids born during the pandemic who missed the worst school disruptions — still score below pre-pandemic levels in reading. Math has slowly recovered. Reading hasn't moved.
The Economist asked in January whether edtech is "mostly useless." The responses filled a letters page. Schools across the world had invested billions into digital platforms, adaptive learning software, and classroom hardware. The learning gains? Modest at best.
Enter AI — Again
Into this vacuum, AI tutoring has arrived with the same promise edtech made 20 years ago: personalized learning at scale.
The numbers are staggering. According to the Christian Science Monitor, 84% of American students now use AI in school assignments — up from effectively zero three years ago. Only 13% of schools encourage it across all classes. One in five schools has no AI policy at all.
The edtech industry sees a gold mine. AI tutoring companies are racing to fill the gap left by pandemic learning loss and decades of flat reading scores. Same pitch as before: technology will personalize instruction, free up teachers, reach kids who'd otherwise fall behind.
The difference this time: there's actually some evidence it might work.
The First Real Evidence
Two randomized controlled trials — the gold standard in education research — tested AI tutoring against human tutors. The results were surprising.
In a Google/Eedi Labs study, 165 students aged 13-15 were randomly assigned to chat with a human tutor, an AI tutor called LearnLM, or receive static hints. Supervising tutors approved 76.4% of the AI's responses with little or no editing. Students who worked with the AI performed better on harder follow-up topics — a 66% success rate, versus 61% for human-tutored students and 56% for those receiving static hints.
A second study at Stanford tested an AI tool called Tutor CoPilot with 1,000 elementary students. The AI didn't replace tutors — it suggested responses that human tutors could choose, edit, or ignore.
These are promising results. They're also extremely narrow: small samples, short timeframes, focused on math. As Axios reported this month, OpenAI just unveiled a framework to track AI's impact on student learning because the current evidence base is so thin. "Limited studies show AI tutoring offers gains in short-term recall," Axios noted, "but there's little insight into the tech's lasting effects."
The Lesson Nobody Learned
The pattern is worth naming. In the early 2000s, Silicon Valley convinced schools their system was broken and computers could fix it. Schools bought the pitch. Twenty years later, test scores dropped, reading stalled, and a neuroscientist had to testify before the Senate to explain what went wrong.
Now AI companies are making the same pitch to the same broken system. The technology is better. The evidence is slightly more rigorous. But the fundamental problem hasn't changed: schools are adopting tools faster than anyone can prove they work.
Brookings analyst Rebecca Winthrop and author Jenny Anderson wrote in the Washington Post that children "need a complete understanding" of how AI works — not just prompt engineering skills. Students with lower AI literacy were more likely to use the tools to finish assignments. The kids who understood AI best used it least.
That finding is the whole story in miniature. The students with the most knowledge made the most careful choices. The ones without it became dependent.
What This Means
The question isn't whether AI can teach. Early evidence says it can, in narrow contexts. The question is whether the education system — the same one that spent $30 billion on edtech without measuring outcomes — has the discipline to demand proof before rolling out the next wave.
The OECD wants ethics and values taught alongside AI skills. Massachusetts just passed a law requiring evidence-based reading instruction. Steps in the right direction.
But 84% of students are already using AI. Adoption outpaced evidence years ago. The only question left: will this generation be the second in a row to pay for the system's failure to ask a basic question?
Does this actually work?
History says it will. The evidence says it doesn't have to.
Sources & Verification
Based on 5 sources from 1 region
- Fortune (North America)
- Boston Globe (North America)
- FutureEd / Georgetown (North America)
- AP News / NWEA (North America)
- Christian Science Monitor (North America)