95% of UK Students Use AI on Assignments. Their Professors Still Don't Know What to Do.
A major UK survey found 95% of undergrads use AI for assessed work, up from near-zero three years ago. Universities are scrambling to respond as students say AI is 'making us all lazy.'

Ninety-five percent of UK university students use AI for assessed work. Three years ago, that number was near zero. The shift happened so fast that most professors are still writing policies about tools their students mastered months ago.
That's the headline from the Student Generative AI Survey 2026, published this week by the Higher Education Policy Institute (HEPI). It surveyed 1,054 full-time UK undergraduates and paints a picture of a system that has lost control of one of the biggest changes in how humans learn.
The Speed Is the Story
The numbers move fast. In 2024, 3% of students admitted submitting AI-generated text in assignments. By 2025: 8%. This year: 12%. And that 12% counts only students admitting to wholesale AI submissions; the 95% figure includes everyone using AI for research, brainstorming, editing, or understanding course material.
"It is making us all lazy," one student told researchers. Another said, "I'm not using my brain at all." A third put it bluntly: "It encourages you to think less."
These aren't critics of AI in education. They're the users.
Universities Can't Keep Up
Only 37% of students say their university encourages AI use. Just 38% get AI tools from their institution. And 42% avoid AI not because they don't want it — but because they're afraid of being falsely accused of cheating.
Universities built a paradox: students use AI everywhere, professors can't detect it reliably, institutions haven't decided if it's allowed, and almost half of students are anxious about punishment for something nearly everyone does.
Sixty-five percent say assessments have changed substantially. But unevenly. Russell Group universities — the UK's top-tier research institutions — are more likely to encourage AI use. Students elsewhere feel abandoned.
"AI literacy and capability must be embedded across the curriculum," said Charlotte Armstrong, co-author of the HEPI report. "These skills can't be treated as optional."
It's Not Just the UK
Across the Atlantic, the picture looks similar — and younger. Pew Research Center surveyed 1,458 US teens and found that AI-assisted cheating has become, in their words, "a regular feature of student life." Thirty-four percent of teens say they cheat with AI "extremely" or "very often." Another 25% cheat "somewhat often."
That's 59% of American teenagers admitting to regular AI-assisted cheating before they even reach university.
The pipeline is clear. Students arrive at university having already spent years using AI to do their homework. By the time professors try to teach them when and how to use it responsibly, the habits are already baked in.
The Real Risk Isn't Cheating
A February analysis published on Phys.org argued that the entire cheating debate misses the point. The real danger isn't students faking assignments. It's the quiet erosion of the cognitive skills those assignments were supposed to build.
Writing an essay isn't busywork. It forces you to organize scattered thoughts into a coherent argument. Struggling through a math proof builds instincts no answer key provides. Remove the struggle, remove the learning.
The HEPI survey backs this up. Nearly half of students (49%) say AI has improved their experience — mostly by saving time and providing instant answers. But 16% say it's made things worse, citing skill erosion, unfairness, and social isolation.
Fifteen percent of students now use AI for companionship, advice, or to address loneliness. The tool they brought in to help with essays is becoming a substitute for human connection.
The 68% Problem
Sixty-eight percent of UK undergraduates believe AI skills are essential to succeed in today's world. They're probably right. But fewer than half — 48% — say their professors are helping them develop those skills. Arts and humanities students feel the gap most acutely.
That's the core tension. Students know AI is their future. They're teaching themselves. And institutions charging tens of thousands in tuition can't keep pace with a technology that updates every few months.
Some universities are adapting. The UK government's AI Opportunities Action Plan aims to position Britain as a global leader in AI adoption. HEPI recommends that universities provide AI induction sessions for all incoming students, specialized training for staff, and clear published guidance on ethical use.
But recommendations don't change the fact that right now, nearly every student is using a tool most professors weren't trained to teach with.
What Happens Next
The enrollment cliff is already hitting universities in 2026. Declining birth rates from the Great Recession era mean fewer students showing up this fall. At the same time, public confidence in higher education is dropping. Deloitte's 2026 higher education outlook warns that policymakers are questioning the sector's entire business model.
Into this comes AI — the biggest change in how students learn since the internet. Universities that figure out how to teach with it, not against it, will survive the cliff. The ones still debating whether to ban ChatGPT probably won't.
The students have already decided. Ninety-five percent of them.
Sources & Verification
Based on 5 sources from 3 regions
- HEPI (Europe)
- BritBrief (Europe)
- Pew Research Center (North America)
- District Administration (North America)
- Phys.org (International)