*Cover image created with ChatGPT4, Dall-E3, and Adobe Firefly
I recently had a discussion about “generative learning.” It was a phrase I hadn’t heard before. A bit of research and I found that it is, indeed, a thing–and it was a thing before the generative AI shockwave of 2022. Although it was interesting to read a bit about it, what was perhaps more interesting was the path my brain took before I understood the technical definition.
Before I go on my theoretical tangent, here’s a basic definition:
Generative learning is the process of transforming incoming information (e.g., words and pictures) into usable knowledge (e.g., mental models, schemas). As such, generative learning depends not only on how information is presented to learners (i.e., instructional methods) but on how learners try to make sense of it (i.e., learning strategies). (pp. 717–718)
Fiorella, L., & Mayer, R. E. (2016). Eight ways to promote generative learning. Educational Psychology Review, 28(4), 717–741. https://doi.org/10.1007/s10648-015-9348-9
Generative learning strategies include summarizing, mapping, drawing, imagining, self-testing, self-explaining, teaching, and enacting.
Generative Learning and Generative AI
OK. Now to my thinking that came before discovering the above.
I see learning as coming to see something differently. This includes shifting perspectives, thinking differently, or acting in new ways. The acquisition model of learning–that we learn through “acquiring” information–could be a first step in this process, but I don’t see that as truly learning unless it is integrated into my own understanding and being–much like what is called for in the generative learning described above.
Given the real estate Generative AI (specifically large language models) has taken up in my head lately, it’s not surprising that the word generative shifted me to AI mode. It made me think of how we learn with AI–not how we get information from it or how it writes papers for us, but whether we actually learn with it.
Note, I’m often skeptical of thinking of learning with a technology as a “new” way of learning. There are absolutely cases where I see technologies transforming the way we learn, but most of what we see as new is just the same old methods dressed up as new tools. Generative AI is a significant technological development, and it helps us do things we couldn’t do before–but could it actually change learning?
To me, changing learning isn’t really “changing learning” unless it is at a theoretical level. For example, we might give a lecture in a classroom or over a Zoom feed, and technically our experience of learning would be different in each case. However, how we are learning–the structures that happen in the setting–are not much different. We are being told something that then we more or less try to integrate into our thinking, schema, behavior, actionable knowledge, beingness, or however else you want to define it.
Truly “changing learning” is most easily seen in various learning theories–the change not only includes a new understanding of how learning happens, but actually redefines learning altogether. It even shifts what we might define as being or acting intelligently. For example, behaviorism describes learning as a permanent change in behavior–thus, to support this type of learning, we would reward certain behaviors until they became automatic. Cognitivism, on the other hand, is more about remembering and integrating information, so learning from a cognitivist perspective focuses on our mental processes of taking in information and memory processes.
A newer theory–Siemens’s connectivism–changes learning to be more instrumental and goal-oriented; we learn through connecting to people or resources, and “our capacity to know more is more critical than what is currently known.” Being intelligent isn’t about having stored knowledge and skills; it’s about having and making connections that enable flexible understanding and action.
So, could Generative AI truly change learning? Does it pave the way for a new type of learning theory?
I thought about my recent experiences “learning” Python with ChatGPT. This learning consists of asking ChatGPT to produce code for a task I want to do, attempting to apply it, then returning to ChatGPT for troubleshooting or refinement. With practice, I am gaining experience–both in how to construct my prompts and in how to integrate and apply my ideas. I believe I have developed some disciplinary knowledge through the process of generating, refining, and applying. There are some characteristics of this learning that feel different from other learning–though perhaps they are just extensions of other ideas. Some questions I’m asking:
- Is receiving code samples from ChatGPT different from getting samples from a teacher or book?
- Am I simply using ChatGPT to give me information (in the format of code) and then learning from that information?
- Is my process of practicing and evaluating code any different than it would be if I was learning in a more traditional way?
- Am I simply connecting with tools and resources to accomplish tasks, akin to Siemens’s connectivism?
- Is this about learning to code with GenAI?
And some thoughts:
- Getting to the point of being able to create and use code has been much, much faster with ChatGPT than it would have been if I had started by learning the syntax (commands) and properties and built up from that, or even if I had needed to constantly refer to code documentation.
- All my learning has been directed at a purpose–I have had real tasks I have needed to do, and my learning has been in direct service to those tasks.
- I would not call myself “proficient” at Python. I am still missing some foundational understanding of the programming language (for example, properties of data types) and would struggle to code independently. But what does “proficient” mean in this context?
- I am in full control of my learning, able to get help from ChatGPT anytime and in any way that I want. It doesn’t get tired of helping me; I don’t have to wait for someone to answer my questions, etc.
- One thing that might be different from Siemens’s connectivism is that although I think there is some of what he calls “actionable knowledge,” it feels different from his claim of “knowledge residing in non-human appliances.” It is more active than that; it is not information I’m accessing but actual skill or collaboration.
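As one concrete illustration of the kind of foundational gap I mean by “properties of data types”: in Python, lists are mutable while strings are not, something a syntax-first course would cover early. (The example below is my own illustration, not code from my ChatGPT sessions.)

```python
# Lists are mutable: changing one in place affects every name bound to it.
a = [1, 2, 3]
b = a          # b refers to the same list object as a
b.append(4)
print(a)       # [1, 2, 3, 4] -- a changed too, because there is one list

# Strings are immutable: "modifying" one actually creates a new object.
s = "abc"
t = s.upper()  # returns a new string; s itself is untouched
print(s)       # abc
print(t)       # ABC
```

This is exactly the sort of property that task-driven learning can skip past until a bug forces the issue.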
Ultimately, I think this type of “generative learning” I am doing is a combination of other types of learning. It is a bit of direct instruction, as the LLM gives me information directly. It includes constructivism, as I have to put the pieces together and integrate them into a product. The LLM, in its own way, holds some type of knowledge, but that knowledge is available to me through direct application and action, not in a static form. It’s as if the LLM has not just knowledge but a unique type of “actionable knowledge” that then supports my efforts.
It might be most similar to learning by design–learning through iterating, reflecting, testing, and making. But the speed at which I can create a prototype (a code snippet) is much faster than I ever could before, allowing for more rapid iterations. It also helps me put my focus on what I want to do with the code rather than worry about syntax, ultimately focusing on practical utility. And I don’t really care much about being able to do this work independently; the resource is always there for me.
Ultimately, what I love about this learning is that I am in charge; I lead each piece of my process, easily transitioning to getting the information I need (how does a “dictionary” data type work? How do I apply it in my case?), trying it, and fixing errors (debugging). I truly feel I have agency in my learning, with ChatGPT being in service to me.
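To make that dictionary question concrete, here is the kind of minimal sketch a learner might ask ChatGPT to walk through and then adapt. (The grade-lookup task is a made-up illustration, not one of my actual projects.)

```python
# A dictionary maps keys to values, so you look things up by name
# instead of by position.
grades = {"alice": 91, "bob": 84}

# Add or update an entry by assigning to a key.
grades["carol"] = 78

# Look up a value by its key.
print(grades["alice"])  # 91

# .get() avoids a KeyError when a key might be missing.
print(grades.get("dan", "no grade recorded"))

# Iterate over key/value pairs.
for name, score in grades.items():
    print(f"{name}: {score}")
```

Generating a snippet like this, running it, and then asking follow-up questions about the parts that fail is the loop I describe above.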
I hope that this is the type of learning we can do with GenAI–where the learner leads and the assistant helps, the learner practices critical thinking while the assistant provides scaffolding, the learner focuses on creation while the assistant provides the details needed for success.