Half of Students Say They Use AI Too Much — and Can't Stop
A 7,000-student Harvard survey reveals teens know AI is undermining their learning. 40% tried to cut back and failed.
In a survey of 7,000 high school students, nearly half said they rely on AI too much for their learning. Over 40 percent tried to cut back — and couldn't.
That's not researchers sounding the alarm from the outside. That's teenagers admitting, unprompted, that something is off.
The findings come from Harvard's Graduate School of Education, shared this week alongside a wave of new research all pointing the same direction: AI is changing how students think, and not always for the better.
The Cognitive Offloading Problem
Researchers call it "cognitive offloading" — when you hand a mental task to AI, your brain stops engaging the same way.
A study this month tracked 52 professional programmers learning a new coding skill. Half used AI. Half didn't. The AI group scored 17 percent lower on comprehension — and wasn't any faster at finishing the task. Beginners, intermediates, and experts all showed the same drop.
The AI's answers were fine. But the programmers who used it didn't learn as much, because their brains weren't doing the work.
It's like GPS navigation. You reach the destination, but if you're asked to draw the route from memory, you're lost. The task got done. The learning didn't.
A Brookings Institution report, drawing on 400+ research papers across 50 countries, put it bluntly: AI's risks to students currently overshadow its benefits. Easy grades plus our natural love of shortcuts are "atrophying students' learning — particularly their mastery of foundational knowledge and critical thinking."
The OECD's Digital Education Outlook 2026 warned of "metacognitive laziness" — students stop thinking about their own thinking. Remove the AI crutch during an exam, and they crumble.
The Self-Regulation Gap
What makes the Harvard data striking is the self-awareness. These students aren't oblivious. They know AI is doing too much. They just can't stop.
Harvard's Ying Xu identified self-regulation as the missing skill. Students need a plan: "I'll do the thinking myself and only use AI for scaffolding." But resisting the temptation — when AI is fast, fluent, and gets you an A — requires discipline most teenagers (and many adults) haven't built.
The same survey asked whether learning math and English still feels important now that AI exists. Motivation dropped sharply for both.
That's deeper than cheating on homework. It's a shift in how young people see the purpose of learning itself.
The Paradox: Ban It or Embrace It?
Here's where it gets complicated. The answer isn't "keep AI away from students."
Michael Brenner, a Harvard applied math professor who doubles as a Google research scientist, put it bluntly: anyone who ignores AI in their learning and career will fall behind. The tools are too powerful.
But his response was clever. When ChatGPT solved his entire graduate problem set, he didn't ban it. He flipped the assignment: invent problems AI can't solve — and prove the solutions are correct.
By semester's end, 60 students had created 600 original problems no AI model could handle. They published a paper together. Brenner said they knew more than any class he'd ever taught — because they had to push past what AI could do.
Cognitive scientist Tina Grotzer saw something similar. Traditional AI-assisted assignments came back as "60 pages of glop." But when students used AI to quiz themselves, generate different perspectives, or critique specific arguments — quality jumped.
Same pattern in both cases: AI as a partner in harder work, not a replacement for thinking.
What's Actually Working
Across the research, a few approaches are showing real promise.
Ask harder questions. When AI handles the easy stuff, classrooms need problems that require genuine human thinking: creativity, judgment, messy real-world synthesis.

Make the thinking visible. Oral exams, explain-your-reasoning assignments, and peer discussions force students to demonstrate understanding, not just produce answers. It's hard to fake comprehension at a blackboard.

Teach smart AI use. The coding study found that programmers who asked AI to generate code and explain it outperformed those who just copied the output. The explanation step kept their brains engaged.

Start with foundations. Students need basic skills before AI enters the picture. You need to understand the chain rule before you can use AI to push past it.

The Bigger Question
What the Harvard survey really reveals is a crisis of purpose.
If the point of school is to get good grades with minimal effort, AI has already won that game. Students know it. They're using it. And they can feel the hollowness of completing tasks without understanding them.
But if the point of learning is to develop a mind that can think, adapt, create, and handle problems nobody's seen before — then AI becomes a tool that extends what you can do, not a shortcut that does it for you.
We've been here before. Calculators didn't kill math. Wikipedia didn't kill research. Both forced education to stop testing what machines could do and start valuing what only humans could.
AI is asking the same question, at a much larger scale: What are schools actually for?
Over 40 percent of students tried to fix the problem themselves and failed. That's not a willpower failure. That's a design failure — in the tools, the classrooms, and how we think about learning itself.
The encouraging part? Some educators are already cracking it. Classrooms that lean into harder work, more creativity, and genuine understanding aren't just surviving the AI era — they're producing students who know more than ever.
The race isn't between students and AI. It's between two versions of education: one that lets AI do the thinking, and one that uses AI to think harder.