What if you could give someone a magical spell, a carefully crafted incantation that allowed them to access a piece of your own magic? You might develop a book of spells, a grimoire, allowing others to perform your magic. This is exactly the concept that Ethan Mollick has explored in his work. He argues that effective AI prompts encode expertise: when we craft prompts, we embed our own knowledge so that AI can apply it on our behalf. Like grimoires, prompts act as artifacts that reflect and transfer human expertise.
I’ve thought about grimoires quite a bit over the last year, wondering how prompts might differ between experts and non-experts. Recently, I started noticing something else: while developing prompts, the LLM wasn’t just embedding my expertise, it was clarifying and building on it in ways that aligned with the theory and practice of teaching and learning. In other words, iterative prompt writing could not only encode expertise but also support its development. The recursive loop between the user and the AI may strengthen the user’s knowledge (may being the key word…).
Using AI to Develop Chatbot Prompts
I have been creating prompts for my class that generate mini-chatbots for specific tasks. Students copy the prompt text into an LLM to create a chatbot, interact with it with a partner, then write a brief reflection on the experience. The goal is to provide them with a scaffolding of expertise to develop their own thinking (you can view them here).
Instead of starting the prompts from scratch, I ask ChatGPT to help me brainstorm ideas for the chatbot and write the prompt, using OpenAI’s canvas editor to refine as I go. I generate a prompt, test it in another chat, then return to refine it based on my observations.
Through this iterative process, I noticed a shift, not just in the AI’s responses but in my own thinking. The refinement wasn’t just about making a more effective chatbot; it was helping me develop and articulate my ideas. This raised an important question: Is prompt writing just a tool for improving AI responses, or could it also serve as a tool for deepening human understanding?
Prompt Writing as a Learning Process
Much of the current research on prompt engineering focuses on optimizing AI responses: crafting precise instructions to ensure the best possible output. I’m not a huge fan of over-emphasizing these techniques, as they may prevent learners from experimenting freely and paying attention to unexpected responses. But the general principles can be useful (I do love Ethan Mollick’s perspective, which he writes about here).
However, given that some LLMs are quite capable of generating their own prompts, a shift in focus may be warranted. Rather than seeing prompt writing as merely a technical skill for refining AI-generated content, it could be a reflective process that enhances user understanding. This perspective diverges from conventional prompt engineering, which tends to view AI as a system to be optimized rather than as a partner in human learning.
Last Friday, I was writing a prompt to create a chatbot that would support students in translanguaging (mixing languages to develop both). Instead of simply feeding the AI a static command, I engaged in an iterative dialogue: asking for ideas, refining responses, and adjusting based on my own experiments with the generated prompt. This back-and-forth process deepened my reflection on effective teaching and learning strategies.
I began with this initial prompt (you can see the full conversation here):
“I would like to create a chatbot to support emerging bilingual students in 3rd-5th grade in culturally relevant translanguaging practices. It should integrate academic vocabulary in science and social studies, aligned with Common Core standards. I think it should have some type of conversation with students that includes both languages and academic discussions. I want it to leverage the affordances of LLMs. What types of things might this chatbot do?”
This prompt reflects some of my expertise: translanguaging, academic vocabulary, Common Core standards, and technology affordances. But the LLM didn’t just accept my ideas; it expanded on them, suggesting multiple possibilities (some far more complex than I intended). I asked for something simpler, and it proposed a structured approach:
Simple Structure for the Prompt
- Greeting & Context: Start the conversation in a friendly and engaging way.
- Encourage Bilingual Use: Respond in a mix of English and the student’s home language.
- Academic Vocabulary Support: Introduce and reinforce key words.
- Ask Open-Ended Questions: Encourage critical thinking and engagement.
- Provide Scaffolding: Offer hints, translations, or explanations when needed.
Notice that I did not explicitly tell it to be friendly and engaging, nor did I specify open-ended questions or scaffolding. These are all hallmarks of good teaching that the AI automatically incorporated into the prompt. It further refined the approach by prompting students based on their own academic interests, reinforcing a student-centered learning approach.
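The five-part structure above could be assembled programmatically into a single system prompt for a chatbot. The sketch below is purely illustrative: the wording of each instruction and the function name are my own assumptions, not the actual prompt the conversation produced.

```python
# A minimal sketch of turning the five-part structure into one system prompt.
# All instruction wording here is illustrative, not the author's final prompt.

SECTIONS = {
    "Greeting & Context": (
        "Open with a friendly greeting and ask what science or social "
        "studies topic the student wants to explore."
    ),
    "Encourage Bilingual Use": (
        "Respond in a mix of English and the student's home language, "
        "inviting the student to use either."
    ),
    "Academic Vocabulary Support": (
        "Introduce and reinforce key academic words in both languages."
    ),
    "Ask Open-Ended Questions": (
        "Ask open-ended questions that invite explanation rather than "
        "yes/no answers."
    ),
    "Provide Scaffolding": (
        "Offer hints, translations, or sentence starters when the "
        "student struggles."
    ),
}


def build_system_prompt(sections: dict) -> str:
    """Join the structured sections into a single numbered chatbot prompt."""
    lines = ["You are a bilingual learning companion for 3rd-5th grade students."]
    for i, (title, instruction) in enumerate(sections.items(), start=1):
        lines.append(f"{i}. {title}: {instruction}")
    return "\n".join(lines)


print(build_system_prompt(SECTIONS))
```

The point of keeping the sections as named parts rather than one block of text is that each refinement cycle (like the ones described below) can edit one component without rewriting the whole prompt.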
To refine the prompt, I copied each version into a new chat and tested it. When I returned to the original prompt-writing chat to request modifications, I noticed ChatGPT implementing my ideas with more specificity than I had suggested. For example, when I wanted the chatbot to better support students in their non-native language, it added:
“Provide sentence starters and scaffolded support to encourage practice.”
This is a fantastic way to build students’ confidence in language learning. When I directed it to be culturally responsive, it included:
“If the student mentions something from their background, connect it to the topic.”
I can imagine an early-career teacher recognizing the importance of culturally responsive teaching but struggling to implement it in practice. This instruction would provide a clear way to apply the principle effectively.
So What?
What made this experiment unique wasn’t just the final chatbot prompt (which you can find here) but the learning that happened along the way. As I engaged with the AI, I found myself reflecting on core pedagogical principles. The AI wasn’t just responding; it was refining my ideas and filling in gaps with a surprising degree of expertise.
This experience raises an intriguing question: Can AI help teachers not only implement effective strategies but also build specialized knowledge? My experience suggests that engaging with AI in a thoughtful, iterative manner, rather than simply using it as a content generator, can be an avenue for professional growth. However, would this still be the case for new teachers who may not recognize the same nuances I do? Would they write their prompts differently, potentially missing the cue words that guide the AI toward deeper pedagogical insights? I’m off to find out.