Right, let’s be honest here. Two years ago we were all asking “should we even let AI into our schools?” Now? That ship has sailed completely.
Walk into any secondary school today and you’ll find Year 7s already using ChatGPT for homework (badly, I might add). Teachers are quietly experimenting with lesson planning tools when they think no one’s looking. Half our staff meetings somehow end up in heated discussions about whether AI-generated reports are the future or the beginning of the apocalypse.
The question isn’t whether to use AI anymore – it’s how we do this properly without losing our minds in the process.
I’ve been working with schools across the country, and here’s what I’ve noticed. The ones getting it right aren’t necessarily the ones with the biggest budgets or fanciest tech departments. They’re the schools that started small, piloted with a few brave teachers, and set actual boundaries from day one.
Most importantly, they had proper conversations about what AI fundamentally can’t do. Spoiler alert: it can’t replace good teaching, no matter what the marketing brochures claim. But what it can do is pretty transformative when you get it right.
The successful schools I’m seeing have moved beyond generic AI tools that don’t understand their context. They’re using platforms that actually know their curriculum, work with their existing policies, and help them spot trends they might have missed. Teachers are finding they can plan faster and reduce that soul-crushing workload we all know too well. Leaders are making data-led decisions with actual confidence rather than just hoping for the best.
Here’s what’s actually working on the ground: staff training that’s practical rather than theoretical waffle, clear policies that teachers can actually understand without a law degree, and leadership teams brave enough to admit they don’t have all the answers yet. Because honestly? None of us do.
The schools that are really struggling? They’ve fallen into one of two camps. Either they banned everything outright (good luck policing that) or they threw the doors wide open without any safeguards whatsoever. Both approaches are disasters waiting to happen, and we’re already seeing the fallout.
What I’m seeing work better is when schools choose tools that are actually built for education, not just adapted from business use. Platforms where GDPR compliance isn’t an afterthought but baked in from the start. Tools that help identify and support needs early, rather than just generating more reports no one has time to read.
The difference is stark. Schools using purpose-built education AI are seeing teachers who can actually focus on teaching instead of drowning in admin. Leaders who can spot patterns and act on early warning signs before they become crises. Most importantly, children who benefit from all that saved time being reinvested into their actual learning.
We need to get comfortable with being uncomfortable here. AI in education isn’t disappearing, but we can absolutely shape how it shows up in our classrooms. The key is starting now, starting small, and staying focused on what actually matters – real impact for real children.
The schools getting this right aren’t just saving time, they’re turning that time into meaningful action. They’re supporting progress, improving access, and making inclusion work better for every child in their care.
It’s messy, it’s complicated, and there’s no perfect playbook. But the alternative – pretending this isn’t happening – isn’t really an option anymore, is it?