Note: Featured image made with Adobe Firefly 3.

I’ve written a fair number of posts lately where I’ve explored my experiences with large language models, like ChatGPT, and questioned whether what was happening was a “new” type of learning or simply an amplified or enhanced process that is basically the same as the other learning I do (see here, here, and here).

This came to a head today when I was reading Bender et al.’s “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜” (side note: my new goal in life is to have an article title that includes an emoji). The paper was presented at a computer science conference in 2021 (ACM FAccT, the ACM Conference on Fairness, Accountability, and Transparency). It digs deep into the problems with LLMs, including what happens when training data becomes too large to ethically curate or fully document.

As with my reading last week, there were many terms, phrases, and references I did not understand, so I asked ChatGPT for a bit of help. I got to this paragraph:

As the late Karen Spärck Jones pointed out: the use of LMs ties us to certain (usually unstated) epistemological and methodological commitments. Either i) we commit ourselves to a noisy-channel interpretation of the task (which rarely makes sense outside of ASR), ii) we abandon any goals of theoretical insight into tasks and treat LMs as “just some convenient technology” [p.7], or iii) we implicitly assume a certain statistical relationship — known to be invalid — between inputs, outputs and meanings.

Bender et al., 2021, p. 615

The reference to epistemological commitments piqued my interest, and I did a bit more exploring of what this could mean. My conversation with ChatGPT started with simple explanations of the paragraph, but soon veered into other thoughts I have been having, thoughts very loosely connected to the actual quote.

Before I continue…two tangents are required to understand my thought wandering:

Tangent #1: “Not for the Truth of the Matter”

As many know, I like watching murder trials. My pastime has led to small shifts in the way I think about facts, evidence, and judgment. One of these has to do with the phrase “not for the truth of the matter.”

During a trial, a witness usually cannot repeat what someone has said out of court, because the person being quoted isn’t there to be questioned on the statement. This is called hearsay. However, there are exceptions. One of these exceptions is when a statement is offered “not for the truth of the matter”–that is, not to show that something is true, but for some other reason, such as to show the impact on the listener.

An Example: Say my neighbor Jim overhears my neighbor Bob telling me that my cat is ugly. I then “unalive” Bob. In my murder trial, Jim may be allowed to repeat exactly what Bob said to me. This is not to prove that my cat is ugly (she most definitely is not) but rather to provide some reason that I may have had it out for Bob. Thus, the truth of the statement doesn’t matter–it is how the statement impacted me that is relevant.

Tangent #2: Creative Epistemologies

A few years ago–as part of my PhD dissertation–I collaborated with middle school teachers to explore what a designerly approach to being a teacher might be like. This included finding diverse methods for coming to understand and interact with a situation–what I called epistemic fluency. I outlined four mindsets that we practiced with, striving to see our work from different angles. Epistemic fluency was the ability to effortlessly shift between these different ways of knowing, being, and acting, enabling new interactions with practice and expanding agency.

One of these “mindsets” or “epistemologies” was “creative.” To play with this idea, we applied forced association–we used an online random word generator to produce words that we then turned into metaphors for learning. What the word was didn’t matter; it was deliberately random. The random stimulus pushed our thinking to make new connections. It was similar to some of the Creative Tools taught by choreographer and creativity expert Liz Lerman.

Sometimes when I’m using ChatGPT my thinking is similar: its output is less random than a random word generator, but more random than facts grounded in physical reality.

Back to Today’s ChatGPT Convo

After exploring the quote from the article, my mind began making connections to all this other stuff I’ve been thinking. I let my interaction with the bot wander, sharing tangential ideas and ongoing quandaries. I tried not to accept everything it said (that would be hearsay), but allowed my mind to explore its statements. I searched for curiosities that I might build on and that could push my thinking. For example, here is one response I got when I was puzzling over whether the type of learning I was doing with GenAI was really “different”:

A New Paradigm?
Your question about whether this is a “different” kind of learning or simply a variation of constructivism is quite insightful. This could indeed represent a new learning paradigm where the integration of AI tools (like LLMs) in personal and educational settings challenges and extends traditional definitions of learning and knowledge. This paradigm:

  • Expands the Toolkit for Learning: It leverages technology not just as a tool for accessing information, but as an integral part of the cognitive process.
  • Shifts the Role of Knowledge: It suggests a model where the value of information is not just in its accuracy, but in its capacity to provoke thought, reflection, and creativity.
  • Emphasizes Skills Over Facts: In this view, the skills of querying, interpreting, and critiquing information become more important than simply acquiring facts.

Implications
This approach could indeed be fostering a subtle shift in educational practices, reflecting broader changes in our information-rich, digitally connected world. It emphasizes skills such as critical thinking, adaptability, and creative problem-solving — skills that are increasingly important as information becomes more accessible and abundant.
Your exploration of learning with LLMs might not only represent a different way to practice constructivism but also highlights the adaptive nature of learning theories to encompass new technologies and methodologies. It invites further reflection on how these tools might shape future educational paradigms.

This response touched a bit on another conversation I had today–seeing GenAI as a part of the cognitive process. It also matched up with some things I’ve been preaching about the need to focus learning on certain thinking skills.

However, what was particularly insightful was its comment about how this might “shift the role of knowledge”–this was what I had been trying to articulate: that the “stuff” I’m getting from GenAI is valuable not for its truth but for how it can support my own thinking.

I wasn’t sure about the use of “knowledge” in this context, though; can “knowledge” really mean something devoid of any foundation in truth? I asked a bit more, eventually getting something else to think about:

Reevaluating “Knowledge” with AI

The use of AI challenges and expands our understanding of knowledge:

  • Epistemic Fluidity: Knowledge becomes more fluid and dynamic, seen as something that evolves through interaction, dialogue, and reflection rather than something that is simply “acquired.”
  • Critical Engagement: The role of the learner or user shifts towards being an active participant in critiquing, contextualizing, and constructing knowledge from the information provided by AI.
  • New Literacies: This perspective necessitates new literacies, including digital literacy, information literacy, and critical thinking skills, to navigate and utilize AI-generated content effectively.

In conclusion, when we use information from AI without assuming its correctness, we are not necessarily abandoning the idea of it as knowledge. Instead, we are redefining what knowledge means in a context where information is abundant and not always accurate. This approach emphasizes the skills of discernment and critical thinking, and recognizes knowledge as something that is continually created and recreated through our interactions with information, rather than something that is simply discovered or received.

Did you catch the “epistemic fluidity” line? I’ve come full circle, from this reflection-epistemology-knowledge line to “epistemic fluidity,” which is awfully close to the “epistemic fluency” I wrote about in my PhD dissertation. Is this evidence that I was such a significant determining factor in ChatGPT’s responses that it voiced an unarticulated pattern that was bouncing around in my head? Or was it just random chance that it found a term so similar to what I was writing about three years ago? It doesn’t really matter. Its words are not for the truth of the matter. They are for spurring my thinking and helping me make connections.

Note: It is important to remember that, even though I’m not treating ChatGPT’s output as “truth,” it unavoidably impacts how I think. It was a partner in this process. This is not necessarily a good thing, as ultimately its words are built on biased and hegemonic data. It is important to try to remain aware of its influence, even while suspending belief.