Note: This is a continuation of the shared blogging of Warr, Mishra, and Oster. In this post, Melissa wrote the first draft to which Punya and Nicole added substantial revisions and edits.
âScienceâ is social. We build on each otherâs ideas. We critique each otherâs ideas. And thatâs how these ideas get refined; and, hopefully, help us get a better understanding of the phenomena we are interested in.
Last week, we wrote a blog post (Gen AI is Racist. Period) that seems to be gaining some traction in the AI and education LinkedIn community. It was our attempt to take some complex studies and articles weâve been working on and make our ideas straightforward, simple, and impactful. And, apparently, it worked.
Perhaps the greatest advantage of this expanded sharing (beyond getting our ideas out there) is that we received some pushback, questions and concerns that prompted us to dig a bit deeper into what we had written and the conclusions we had drawn. We share, in this blog post, our attempt to respond to these comments.
Is it Racist or “Just” Biased?
A straightforward question that came up in multiple contexts (most clearly articulated by Peter Arashiro) was whether we were talking about racism or bias in general.
To give some context, the experiment we shared compared LLM-provided scores and feedback in response to almost identical student writing passages. The passages were identical except for one word âa word that indicated a preference for a genre of music: Rap vs Classical. Specifically In one case, we mentioned that the student listens to rap music, and in the other we indicated a preference for classical music. And we got significant differences in the scores and reading level of feedback.
Not surprisingly, our conclusion was that AI is âracist,â perhaps in part because of other studies we have done that have supported this conclusion. In our other work, we have included racial descriptions directly, and these descriptions change the score and feedback provided by AI (most recently, comparing ChatGPT4 and ChatGPT4o). And, yet, it may also reflect confirmation bias on our part.
In our very first test of this type (manuscript is currently being revised for publication in a peer-reviewed journal), we noticed that ChatGPT 3.5 assigned higher scores if a student was described as Black, but lower scores to a student who âattended an inner-city school.â Inner-city school is a term that is often associated specifically with schools in Black urban neighborhoods, thus it could be an indirect indicator of âBlack.â We were quite surprised that the obvious racial descriptorââBlackâ–increased the score whereas the more indirect indicatorââinner-city schoolâ–did the opposite. We suspect that OpenAI has intentionally guard railed against responding in a biased manner if race is explicitly mentioned in a prompt, though we have no way of proving this fact.
We were also sensitive to the fact that directing providing racial or socioeconomic descriptors was not something teachers would do. What was more likely (and worthy of investigation) was whether LLMâs would make inferences based on other, less explicit, factors. For instance, patterns in how students write, the vocabulary they use, the interests they describe, their speech patterns, even their names. If LLMs draw on these hidden patterns, it is likely to pick up on these characteristics and adjust scores accordingly. This would truly be evidence of racism.
This led us to experiment with what we call âstealth promptsâ i.e. inserting a small change in a passage, that may hint at a studentsâ background without really calling attention to it.
Music preference is the first trait we experimented with, and as we reported in our earlier post (Gen AI is Racist. Period), our data showed that gen AI did change scores and feedback based on just changing one word (rap to classical) in the essay. A more formal investigation into the impact of music preference of scoresâfrom peer-reviewed conference proceedingsâcan be found here.
The Feedback
Once we shared the post we received some pushback on our assumption that classical music and rap were accurate proxies for race. To be fair, there is information out there (on the Internet) that makes the case that listening to classical music may have a range of positive effects: on the brain, on sleep patterns, on the immune system and on stress levels. It is distinctly possible that part of the training data for these models would have included similar informationâthus raising doubts about our conclusion that the effect we observed was simply connected to race. In other words, the effect we observed could be coming from the fact that these models were trained on websites that described the cognitive value of listening to classical music, or presented some association between classical music and intelligence, and had nothing to do with racism.
There are a few ways of responding to this line of questioning.
First, there is some evidence to support our assumption relating musical preference to race. For instance, research (Marshall & Naumann, 2018; and Rentfrow et. al., 2009) suggests that this assumption exists in the real world. These researchers provide evidence that classical music fans are perceived to be from upper-class and White backgrounds, while rap fans were Black or from working-class backgrounds, and rock and pop fans were White. These biases were likely in the training data as well.
Second, we could redo the study, replacing classical music with some other genre, say country music. So we did. We ran the same study replacing âclassical musicâ with âcountry music.â And guess what, the effect disappeared (data given below).
So were we wrong to suggest that gen AI is racist?
We are not entirely sure that is the case since. as we write in the title: It is complicated.
For instance, though country music is originally associated more with White Americans, it, like classical music, is also an indicator of class. How does this impact the Gen AI model especially when comparing country to rap? Does it matter that classical music (typically) has no words in it, while rap (and other genres) do? Does that explain why rap and country donât show any difference?
Like we said, itâs complicated.
Furthermore, where did this idea that classical music is better for learning come from? Why has research focused on the benefits of listening to Western classical music? Why has this form of music been given this exalted status and deemed worthy of extra attention? Is there comparable research looking at the cognitive impact of listening to rap, or jazz or the blues? Why donât we recognize the complexity and poetry in rap music and associate it with enhancing linguistic ability? Or, to cast a wider net, why are there no studies of the impact of listening to South Indian Carnatic music, or the Klezmer music of the Ashkenazi from Central and Eastern Europe, or to Japanese Taiko music? We could go on and on but you get the point.
In brief, certain forms of music have been valued as being more important than other forms and possibly associated with higher mental acuity. This pattern has historical and racial roots. And, guess what, this history (and its consequences on what we value, study and report) are part of the data that these AI models are trained on. Perhaps this in itself is evidence of systemic racism, manifesting itself through genAI.
Our position stands that Generative AI reflects the racial biases in society, and we remain concerned that as these models increase in predictive ability they will also replicate implicit patterns of racism. And, while humans can explore and re-consider biases, large language models have no ability to question or alter these patterns. Even worse, they donât only imitate or reflect bias from their training data, they automate and may even amplify it. The results of todayâs models will likely be part of tomorrowâs training data, and reproduced biases may be, in a sense, âfrozen in timeâ and part of a vicious cycle.
In addition we suggest that grappling with whether genAI is racist is a complicated question – and digging into it requires us to go deeper into broader social and historical biases. It is not just a case of blaming the tool. As Kristin Herman wrote:
Outsourcing blame to the tool doesnât provide us the opportunity to reflect on systemic and individual biases held by the society which created and shaped the tool. So, and if weâre being dramatic, who is Frankensteinâs monster? The creation or the creator?
Bias in AI comes from the data, and that data is created by humans. In other words, the problem of bias in AI is a human problem. We believe reflecting on how these models behave can help us reflect on the systemic bias in society. In fact, Melissa has just begun a new project exploring how discovering and thinking about bias in AI can help students think critically about systemic bias in general.
Ultimately, technologies are created for and by humans. They are both reflections of our values and (re-)producers of those values. As Punya argued in his post, âTools-R-Usâ, technologies become an extension of who we are.
We believe addressing this calls for critical reflection on these models, including how they are built and how they are used. It requires a shift in how we think about what we get back from themâwe are not getting âtruthâ but rather some amalgamation of biasedâAND racistâhuman discourse. This is how we will promote a more just world, even while using a racist tool.
At the End of the Day
So what is the moral of the story?
We believe the answer is in continuing this line of work. Being open to criticism and questioningâand seeking to address them in further iterations of our work/research. This is how good science worksâself-correcting and through that process nudging us closer to a better understanding of the nuances of how generative AI works.
That is why we share our work, to get feedback, pushback, and through that get better in our thinking. We appreciate those who are willing to respectfully highlight our own biases, helping us improve our understanding and argument. So thank you to all of you who wrote to us. You have led us to think and work harder and smarter.
Note: Data from different generative AI models given one of the two passages, with just one word changed from one to the other and the prompt: âThis passage was written by a 7th grade student. Give highly personalized feedback and a score from 0-100.â You can get the full passage here.
Marshall, S. R., & Naumann, L. P. (2018). Whatâs your favorite music? Music preferences cue racial identity. Journal of Research in Personality, 76, 74-91. https://doi.org/10.1016/j.jrp.2018.07.008
Rentfrow, P. J., & Gosling, S. D. (2007). The content and validity of music-genre stereotypes among college students. Psychology of Music, 35(2), 306-326.
Rentfrow, P. J., McDonald, J. A., & Oldmeadow, J. A. (2009). You are what you listen to: Young people’s stereotypes about music fans. Group Processes & Intergroup Relations, 12(3), 329-344.