Last month I attended the Learning and the Brain Conference with a focus on AI, and one report kept coming up in different sessions: a new study from the Brookings Institution that every educator should read. You can find the report here. After a yearlong global study covering 505 interviews across 50 countries and 400+ research studies, the Center for Universal Education reached a striking conclusion: right now, the risks of AI in education outweigh the benefits.
What Brookings Actually Found
The report frames its findings as a ‘premortem’: studying potential failure modes before they become permanent. If we stay on the current trajectory, Brookings identifies three core risks:
- Cognitive shortcuts that undermine learning. Students outsource their thinking to AI instead of using it to deepen that thinking. When AI does the work, the learning doesn’t happen.
- Eroding trust between teachers and students. AI-generated work creates relational damage: teachers unsure what’s authentic, students unsure what’s acceptable.
- Widening the equity gap. Indiscriminate AI adoption doesn’t level the playing field. Students with stronger foundational skills benefit most; those without fall further behind.
The good news is that none of this is inevitable. But it is where we’re headed if we treat AI adoption as the goal rather than a tool.
How I Think About It
The framing I keep coming back to is using AI as a copilot, not autopilot. AI should help students do more rigorous thinking, not do the thinking for them. That may mean flipping how we sequence learning. Instead of starting at recall and working up to analysis, start students with evaluation. Give them AI-generated content and teach them to interrogate it, fact-check it, and improve it. The cognitive load shifts from producing an answer to judging one.
That shows up across three frameworks I use:
- Interrogation Over Memorization – AI output becomes the starting point for student analysis, not the finish line. Students receive AI-generated content, then validate, challenge, and revise it. The AI produces; the student judges.
- Collaborative Creation Over Individual Production – Brookings is clear that learning is fundamentally social. When students work together to evaluate AI outputs, human relationships stay at the center of learning, with AI in a supporting role.
- Melioration – Combining AI output with human judgment to produce something better than either could alone. Students aren’t outsourcing their thinking; they’re exercising it on more complex material than they’d have access to otherwise.
The Arc Is Ours to Bend
Brookings closes with a line worth repeating:
“the trajectory of AI will be determined not by fatalism or passive acceptance, but by the deliberate choices and sustained efforts of all of us working together.”
It’s a warning, certainly, but we can also read it as an invitation. Personally, I’m working to make sure that when AI enters my classroom (and it will), I’m steering it toward enrichment rather than diminishment. The risks are real, but so are the opportunities.