There’s always a lot of hype around new technologies–my favorite example was how the “educational talking picture” was going to be epochal:

The introduction of the use of the talking picture into education may prove to be an event as epochal as the application of the principle of the wheel to transportation or the application of steam power to the industrial age. No development in education since the coming of the textbook has held such tremendous possibilities for increasing the effectiveness of teaching as the educational talking pictures.

Devereux, F. L., Engelhardt, N. L., Mort, P. R., & Stoddard, A. J. (1933). The educational talking picture. Chicago, Ill.: University of Chicago Press.

Of course, I have also written that while it is easy to see this quote as overblown today, if we look at the broader context, the educational talking picture has indeed had a significant impact on how and what we learn today.

I’ve wondered whether generative AI will be another “educational talking picture”. Will the current excitement and fear over AI seem silly 50 years from now? Will it be easy to see how it changed education?

The above is all a preamble to what I really want to talk about here: generative AI and technology integration models (like SAMR, PICRAT, and TPACK), and whether genAI is truly unique, requiring different approaches than our current integration models offer.

As I’ve worked more and more with generative AI, using it in my courses and personal life, I’ve noticed the psychological uniqueness of this technology. It seems like a supremely confident human (a drunk intern…) who knows everything, and, as Punya Mishra has pointed out (here and here), we can’t stop ourselves from the dreaded anthropomorphization (ChatGPT assures me that is a word).

But then I work with it and see how “average” it is, how it regresses to the mean. I’m writing this post with its help, and it frequently veers into common language and discourse around genAI, ideas that I’m glad others research (for example, prompt engineering) but that I am trying to critique and extend. I have to push myself to really think carefully about each part so I’m not being “average” (though it sure sounds good on the surface), which often means re-writing long passages from scratch. But AI output can sure seem good, particularly to my students who do not yet have the specialized knowledge I developed in a pre-genAI world.

So what am I getting at here?

I’ve come to the conclusion that genAI is truly a unique technology, one that calls for a new paradigm, and that means carefully re-evaluating our technology integration models. Will thinking through these models help us support teachers in integrating genAI creatively and effectively? Do we need new models to push our thinking? Or can slightly different perspectives on existing models help us see AI differently?

Here I’m going to focus on SAMR–because it is what brought me down this track–but if this exercise proves productive, I will look at other models in future posts (an article by Punya Mishra, Rezwana Islam, and me addresses our early thoughts on AI and TPACK).

The SAMR model breaks down technology integration into four stages: Substitution, Augmentation, Modification, and Redefinition. Traditionally, this model has provided educators with a structured way to think about incorporating technology into their teaching. At the substitution level, technology acts as a direct tool replacement with no functional change, such as using a digital document instead of paper. Augmentation takes it a step further by introducing some functional improvements, like using collaborative tools for real-time feedback. Modification allows for significant task redesign, enabling students to engage with content in new ways, such as multimedia projects that would have been difficult without technology. Finally, redefinition involves creating entirely new learning experiences that were previously inconceivable, such as virtual reality simulations or global collaborative projects.

Importantly, when I teach SAMR I tell students that there’s not necessarily anything wrong with using technology for substitution or augmentation, but we also want to help them move to the modification and redefinition levels so that we can improve learning as well as meet the needs of today’s students, including the digital and global skills they need to be successful.

But what does this mean for AI? AI is specifically designed to replicate human ability (well, eventually to surpass it, but let’s ignore that for now). It is, or at least seems, very good at things like lesson planning, creating instructional materials, and evaluating and giving feedback on student work. That makes it distinct from other technologies.

Thus, we automatically lean toward replication uses of AI–it is like a human, after all. Much of the current research seems to be focused on these uses: whether AI can match human ability (replacing humans at tasks traditionally done by humans) and how to help humans work with it to do the same things they’ve always done. For example, a recent study focused on how to help pre-service teachers use AI to create lesson plans. I have no problem with this in general–I am doing it right now too–but we need to go beyond replication to the underlying cognitive and societal impacts this can have on students and the educational system.

Now here’s the crux of my argument: when AI is used to replicate, it automatically amplifies (a form of augmentation), and this amplification may not be what we want. It makes traditional human tasks faster and easier to repeat at scale. It makes it simple to produce the same kinds of lesson plans we’ve always produced, evaluate student work in the same way, and unintentionally reinforce the same inequities that exist in its training data.

It also amplifies at a system level. If teachers can do the same things they’ve always done faster, are we going to expect them to do more of the same? Are we going to follow the same pedagogical patterns because they are so easy to execute now? Are we going to create larger class sizes and ask teachers to teach more content because we have streamlined the process? Are we going to continue this cycle of requiring more and more of teachers while giving them less and less respect, failing to recognize the expertise they bring to the classroom? After all, from this paradigm, AI can do much of their job. I sure hope not.

I believe this amplification/augmentation is also what underlies AI angst. When AI does human tasks quickly and easily, things get uncomfortable. We worry about students cheating and about job security. Yesterday, I found myself wondering if I should really give my students an AI prompt that seemed to basically do an assignment for them–how do I ensure they are actually thinking and learning if the AI seems to do the thinking for them so easily?

So. With SAMR. It’s super simple to replicate. And replication automatically amplifies. That is scary. Does this mean we really, really, really need to be pushing to the modification and redefinition stages? What does that look like?

Right now we should be exploring this on multiple levels. How does AI modify our traditional pedagogical approaches? What does it mean for what today’s learners need for the future? What does it mean for the structures of our educational systems and culture?

In summary, SAMR can provide a powerful starting place for considering genAI technology integration. However, I’ve slightly reframed it here–showing how genAI’s ability to “replace” humans makes it so incredibly tempting to use for substitution, and how this substitution automatically augments not just educational tasks but the system itself.

This leads us to how vitally important it is to focus on modification and redefinition. I’m not sure what that looks like–but I’m excited to explore!