The first time a student told me, “This sounds smart, but I don’t think it’s right,” in response to an AI-generated answer—I knew we were onto something.
That moment sparked Fray-I—a thinking routine I’ve been developing to help students analyze AI responses, not just accept them. It’s still a work in progress, but it’s already changing how my students interact with both history and technology.
Here’s the flow:
- Students engage with content – a primary source, textbook excerpt, or short video.
- They ask a question based on the reading or viewing—either one they create or one I provide (especially if the source leaves something unanswered or unclear).
- They run that question through an AI tool like ChatGPT or MagicSchool.
- They read the AI's response and analyze and evaluate it.
Here’s what Fray-I looks like:
- Claim: What is the AI saying? What’s the main idea or argument?
- Evidence Used: What support, facts, or examples does it include?
- What’s Missing: What voices, perspectives, or key historical context are left out?
- Push It Further: How could this answer be stronger? More accurate? More complete? Would you use this response?
This turns AI into the text—not the shortcut.
Students question the bot like they would a biased newspaper article, a government document, or a historical speech.
Why Fray-I works:
- It puts students in the driver’s seat. They’re not copying—they’re critiquing.
- It reinforces essential social studies skills: sourcing, bias, perspective, and evidence-based reasoning.
- It meets students where they are—working with the tools they’re already curious about.
And honestly? The engagement is different.
When students start noticing what the AI got wrong, what it ignored, or how it could be improved, they feel ownership.
Fray-I isn’t finished. I’m still tweaking sentence starters and scaffolds to support all learners. But it’s already doing what I hoped: helping students think like historians in a world where information is instant—but not always insightful.