54% of Teens Use AI for Homework. Their Parents, Teachers, and Schools All Have Different Rules.
Three major surveys reveal a generational split on AI in education — teens see innovation, parents see cheating, professors see a gym with a forklift.

More than half of American teenagers — 54% — now use AI chatbots for schoolwork, according to Pew Research Center's latest survey of 1,458 teens. Their parents think it's cheating. Their teachers can't agree on the rules. And their schools mostly don't have any.
Three studies landed the same week — from Pew, Common Sense Media, and Inside Higher Ed. Together they paint a picture of an education system where every generation plays by different rules on AI. Not different interpretations. Different rulebooks entirely.
The numbers tell a split-screen story
Start with the teens. 48% use AI to research topics, 43% to solve math problems, 35% to proofread writing. That's tutoring, not plagiarism. Only 10% use chatbots for "all or most" of their work.
Now ask their parents. Common Sense Media surveyed 1,244 parents and found 52% believe it's flat-out unethical to use AI on assignments. Just 32% think it should be encouraged.
The teens? Exactly reversed. 52% say it's innovative and should be encouraged. Only 34% call it unethical.
Same household. Opposite conclusions.
The forklift problem
Dan Cryer, an English professor at Johnson County Community College in Kansas, put it this way to NPR: using AI to write a college essay is "like bringing a forklift to the gym."
"If all we needed was the weights moved, then that would be great," he said. "But we need the muscles developed."
Cryer spent a sabbatical studying generative AI and came back convinced educators should use it as little as possible. His argument: education isn't about the product. It's about the process. Society doesn't need more college essays. It needs students who can build an argument, weigh sources, and think clearly.
"What we need is students to go through the process of writing research papers so they can become better thinkers," he said.
Not every professor agrees. Leslie Clement at Johnson C. Smith University — a historically Black institution in Charlotte, North Carolina — co-created a course called "African Diaspora and AI" that examines both the harms and possibilities of AI for Black communities. She lets students use AI for outlines, feedback, and comparing sources.
"We encourage them to use it because we know they're going to use it," Clement told NPR. "But to use it in a responsible way."
Two professors. Same technology. Opposite policies. Both with good reasons.
The cheating paradox
Here's where it gets strange. Pew found that 59% of teens believe AI-based cheating happens "fairly often" at their school. Among students already using chatbots, that number jumps to 76%.
But most of those same students don't think they're cheating. They're researching. Editing. Getting explanations for concepts they don't understand.
Social norms are contagious. When three out of four AI-using students say cheating is common around them, it normalizes the behavior — even for kids who started with good intentions.
Aysa Tarana, a recent University of Minnesota graduate, told NPR she started using ChatGPT for small tasks like topic suggestions. Then she stopped. "I was outsourcing my thinking," she said, "and that felt really weird."
More than half of college students who use AI told Inside Higher Ed they have mixed feelings — it helps, but "can also make them think less deeply." They're not naive about the tradeoff. They're making it anyway.
The equity angle nobody's discussing
Pew's data revealed something that's barely making headlines: 6 in 10 Black teenagers use AI for schoolwork, compared to about half of white teens. That's a higher adoption rate — but it raises uncomfortable questions about access and quality.
If AI becomes a crutch that replaces skill-building, higher usage could mean more harm to the students who need strong fundamentals most. If AI becomes a genuine learning accelerator, then unequal access — not unequal usage — becomes the crisis.
Right now, nobody knows which scenario is playing out. The research doesn't exist yet.
The policy vacuum
Most US public schools still don't have AI policies, per a 2025 Department of Education study. The rules depend on whichever teacher you get — meaning a kid might be encouraged to use ChatGPT in period three and punished for it in period four.
78% of educators say high school students now get AI literacy lessons. Progress. But there's a gap between "here's what AI is" and "here's when you can and can't use it."
At the state level, legislatures are scrambling. FutureEd's tracker shows a wave of 2026 bills — from strict bans on automated decision-making to data protection rules to AI literacy mandates. No two states agree.
What this actually means
The generational AI divide isn't really about technology. It's about what education is for.
If school is about producing correct answers, then AI is the greatest tool ever invented. If school is about building the mental muscles to produce those answers yourself, then AI is Cryer's forklift — impressive, powerful, and completely missing the point.
Both things are partially true. That's what makes this hard.
The teens aren't wrong that AI is here to stay and they'll need to use it professionally. The parents aren't wrong that a 14-year-old who never learns to write an argument from scratch has lost something irreplaceable. The professors aren't wrong on either side.
What's missing is agreement. Not on whether AI belongs in education — that debate is already over, because 54% of teens settled it by opening ChatGPT — but on the rules of engagement. When is it a tutor? When is it a shortcut? When does assistance become dependence?
No universal answers exist. But the questions need asking — in every classroom, at every dinner table, and in every school policy that doesn't exist yet. Because right now, 54% of teenagers are writing the rules themselves.
Sources & Verification
Based on 5 sources from 1 region
- Education Week (North America)
- Pew Research Center, via MyHostNews (North America)
- NPR (North America)
- Inside Higher Ed / Generation Lab (North America)
- Dallas Weekly / Pew Research (North America)