You’ve probably seen lots of metaphors comparing generative AI to a superpower or a genie in a bottle. These aren’t quite right.
If you’ve used gen AI, you know that the “magic power” feeling is fleeting. Yes, it can come up with a pretty good Wikipedia-style explanation of amoebas or utilitarianism, but as soon as you want to produce something really interesting with ChatGPT and other large language models, things start to go a bit wrong.
And that’s a good thing. AI’s shortcomings are a blessing: They give us a really good reason to continue thinking for ourselves.
The same goes for students. And we can leverage this fact (that we mustn’t stop thinking for ourselves) to try to make our students much more thoughtful in how they use AI.
As an academic adviser and learning developer, I speak often with students about their uses of AI. In the early ChatGPT days, students didn’t want to talk about it. I think they were worried they were doing something wrong. But recently they’re opening up. What they say has surprised me.
Students’ approaches to AI run the gamut: some have no experience or interest, some cheat or take shortcuts that are bad for learning, and, wonderfully, a surprising number have a well-developed, self-reflective view of their relationship to AI.
I’m optimistic that this last group will actually become the mainstream. Educators can help move this trend along if we can collectively change the culture around AI, model healthy AI uses and embed critical AI literacy into education.
I’ve taken up that call in the form of a book called AI for Students (Promptly, 2024). The book demonstrates how AI tech can be used not as magic powers, or even as tools, but more as interactive spaces, or virtual classrooms.
In this type of scenario, students learn to use the technology, not to outsource their brains, but in fact to think more creatively and actively—and to have fun along the way.
What does this look like? It’s all about what you ask the bot to do. Don’t ask it to do stuff for you. Ask it to ask you to do stuff. Here’s a little experiment you can try to contrast the two approaches.
First approach: Ask an AI chatbot to write you a sonnet.
Second approach: Write a verse of a sonnet yourself. Enjoy it. Have the bot critique it. Think about that critique. Revise your verse if you wish. Reflect on the interaction.
I tried both approaches, and the two experiences were starkly different. The AI sonnet was of course impressively fast and adequately sonnet-like. But the thing is: I stopped reading after line three, “The winds do carry whispers on the breeze.”
It’s just not that interesting to read mediocre poetry, especially when it comes from something that, as far as we know, is not conscious. It’s boring, and that’s a good thing. Because the second approach was a totally different experience. Here is how that went:
I wrote my own dodgy first verse and, though it wasn’t good poetry, it was very good fun. I haven’t thought about sonnets in ages, and here I was enjoying the rhythms, the imagery, the feeling of friction in my brain from doing something hard.
None of which I got from the previous approach.
Now, when the bot gave me feedback on my verse, it pointed out weaknesses in coherence and clarity of image. It was right. Sure, I was kind of aware of these weaknesses already, but the fact of the interaction made me think harder about my choices. The feedback made me take my writing, though it was just a lark, a little more seriously.
It also made me think harder about what “coherence” and “clarity” even mean as criteria … and were these criteria I valued and wanted to accept? And was the bot’s definition of these terms the same as my own?
And on it went: a real, live learning process. A mini writing workshop. A creative journey. A dabbling with an art I’d never meant to dabble in. And now I’ll go around the world a little more aware of sonnets, and meter, and ideas about coherence and clarity. And I’ll have fun with that.
So here’s the idea you can put into practice: Let’s drop the notion of AI as a magic power. Let’s even drop the notion of AI as a tool, which understates its complexity. Instead, let’s make it a space, a frame, a situation, a set of events that stimulate thinking and keep us engaged.
Show your students how to do this. Show them how to get the bot to quiz them. Or nudge them for better evidence. Or tighter arguments. Or to see things from different perspectives.
How do you do that? Here’s the basic ethos: Don’t prompt the bot to create content; prompt the bot to prompt you. And repeat.
More specifically, you can follow three steps: First, give the bot an aim, like “I need to practice forensic accounting vocabulary.” Second, give the bot a role, like, “OK bot, you’re an accounting expert, and you’re going to give me questions that make me use this vocabulary.” Third, give the bot constraints and formats, like, “Don’t give me answers, just point out weaknesses in my definitions and prompt me to revise my own answer.”
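To make that concrete, here is one way those three steps might combine into a single prompt, using the forensic accounting example above as a rough sketch (swap in whatever subject and constraints fit your course): “You are an accounting expert, and I need to practice forensic accounting vocabulary. Ask me questions, one at a time, that force me to use that vocabulary. Don’t give me answers; point out weaknesses in my definitions and prompt me to revise them myself.”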
And that’s mostly what you need to know about the mindset and the technique of changing AI from being a genie in a lamp or a magic content mill to an interactive space for learning. A space where the learner’s mind is central, engaged and working hard.
Try it, join the conversation and together let’s make these fabulous new technologies a fun, productive and humane piece of the complex system of education.