Every conversation about AI in schools eventually lands on the same word: cheating. Teachers worry about detecting it, administrators write policies against it, and parents ask about it.
That conversation misses the bigger problem.
Yes, students use AI to write papers they pass off as their own, and that's a real concern. But the more dangerous thing happens when a student sits down to genuinely research a topic, types the question into ChatGPT, reads the response, and calls it a day. No database, no sources, no wrestling with a complex text, no thinking.
They didn’t cheat. They just skipped the part that mattered.
The part that matters
Research builds something in students that a ChatGPT summary cannot: the ability to navigate complexity, evaluate competing perspectives, and construct an original argument from real evidence. Those skills don't develop from reading an AI response. They develop from doing the hard work of research itself.
Students are like water: they find the path of least resistance. In the age of AI, that path runs straight through ChatGPT. The job now is to design research in a way that keeps the thinking intact.
A workflow that protects the thinking
The framework I use in my classroom treats AI as one tool in a larger research process, not the process itself. Every step requires students to think, decide, and evaluate. AI provides the assist, not the replacement.
Step 1: Map the landscape
Before students open a database, they ask AI: “What are the main scholarly perspectives or debates about [topic]?” AI orients them. It surfaces multiple angles, key questions, and search terms they hadn’t considered. Then students decide which direction to pursue and explain why. Strategic thinking starts here, before a single source is found.
Step 2: Find credible sources
Students take those search terms to an actual academic database. This step matters because AI collapses here: it fabricates sources, it cannot distinguish credible from non-credible, and it has no real access to peer-reviewed scholarship. A real database does. Together, the two are far more powerful than either alone.
Step 3: Read and check understanding
Students read the actual sources. When they hit a passage they don’t understand, they bring it to AI: “I just read this. Can you explain it so I can check my understanding?” Then they return to the article and keep reading. AI scaffolds comprehension without replacing the reading itself.
Step 4: Synthesize
After working through several sources, students ask AI to help them spot connections: “Scholars seem to emphasize A, B, and C. Do these contradict each other or build on each other?” Then students evaluate that synthesis. Do they agree? What did AI miss? That push-back produces the highest-order thinking in the whole process.
Step 5: Produce original analysis
Students write the final product themselves, and it cites actual scholarly sources, not AI. They have to argue something. AI helped them get there, but the thinking belongs to them.
What this looks like in practice
We can’t assume students figure this out on their own. They need it taught explicitly, the same way they need anything else taught. Model the process out loud. Build in checkpoints where you can see their thinking, not just their final product. Design assignments that demand original analysis, not just collection.
Now, when a student asks “Can I use AI for this?” the answer is yes, and here’s how.
That reframe changes everything. The goal was never to keep AI out of research. The goal was always to keep students in it.