Jon Fila
English Teacher
Northern Star Online

Ready for another article about AI? Exciting, isn’t it?! That’s the message we’re supposed to take from the hundreds of social media posts now inundating our feeds: “120 Mind-Blowing AI Tools” or “If You’re Not Using AI, You’re Falling Behind!” Those are real headlines with thousands of shares and hundreds of comments. Ridiculous, right? Who has the time to follow up on all of that? No one. It’s all clickbait, or worse, it points you to a vendor who can solve all your AI issues with just the right tool (likely a fancy wrapper around ChatGPT).

Don’t get me wrong. I’m a proponent of embracing these Large Language Models (LLMs) in our work. I use them every day. I’m also a skeptic. It is in our best interests to figure out everything we can about these tools and the implications they may have for learning and for learners themselves. My guiding philosophy is that the default version of anything sucks, and it is only when we customize something to suit our individual needs, or the needs of our learners or organizations, that we find its usefulness. So what are the default versions of these LLMs giving us?

Before I answer that, please consider for a moment the last time you experienced a new technology that, by default, supported and lifted up marginalized communities. Maybe you thought of one; I can’t. There is implicit bias baked into just about everything humans make. It’s not always intentional, but it is unavoidable. Think about these LLMs. Who selected the programmers, and what biases do they bring? Choices made when coding often perpetuate existing systemic inequities. Think, too, about how these models were trained: on a massive amount of what humans have written on all manner of topics. Biased humans, many with a propensity for racism, misogyny, homophobia, ableism, and more. We can put all the guardrails we want on these tools, but at the end of the day, so much of what we’ve written as a species is littered with content that does not align with our current values.

We also know that these LLMs have an inclination toward sycophancy¹. They tell us what we want to hear. Some of that is by design, and some of it gets worse as the systems are updated. An LLM will also “make assumptions” about the person doing the prompting, which helps the tool determine just how thorough the response should be. Above all else, humans want to see responses that align with their existing beliefs.

So why do I still encourage people to use generative AI when all of these things are true? Because I believe we can prompt LLMs away from those tendencies by creating a clearer picture of what we are, and are not, looking for. When I prompt these tools, I ask them to incorporate elements of Social Emotional Learning (SEL) and Universal Design for Learning (UDL) in resources for students; I ask for anti-racist examples that include multiple perspectives; I ask for inclusive language that demonstrates care and support; and I use too many other prompts to get into here. I also tell these tools what to avoid: antiquated and debunked practices like learning styles and left-brain/right-brain thinking, along with the many other educational myths that persist. It also matters what we tell the tools about ourselves. If I tell them I am an educator, the responses will be more detailed, and there are further ways to prompt for more comprehensive and inclusive results. A sample prompt follows below.
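To make that concrete, here is an illustrative prompt of the kind I am describing. The wording is only a sketch; adapt it to your own subject, students, and context:

“You are assisting an experienced English teacher. Draft a discussion guide for [the text we are studying] that incorporates elements of Social Emotional Learning and Universal Design for Learning, presents multiple perspectives with anti-racist examples, and uses inclusive language that demonstrates care and support. Do not reference learning styles, left-brain/right-brain thinking, or other debunked educational practices.”

Notice that the prompt names who I am, what I want, and, just as importantly, what I do not want.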

Do I care which of the hundreds of AI applications you choose or use? Not really, though some are better than others. What I care most about is that you do not turn your thinking over to a machine, and that we take responsibility for the material we put in front of learners through a thorough vetting process by subject matter experts with training and background in equity and inclusion initiatives. I care that we are not giving a platform and agency to machines before we even make sure marginalized groups of humans have representation. There are so many other tasks we are handing to AI tools that need thoughtful consideration (observations, data analysis, accommodations/modifications, etc.). Default responses from AI are not what we need; we need thoughtful, caring professionals who will use these tools collaboratively and responsibly. That can only be accomplished through much discussion and consideration of the problem we are trying to solve when we look to these tools. The marketers of these tools aren’t putting those practices at the forefront of what they are doing; that responsibility falls on you. These tools have the potential to change lives; only you will determine whether that’s for better or worse.

Jon Fila is an award-winning educator and currently teaches English at Northern Star Online. He has written several books on the use of AI in education for students and educators. He provides workshops and other trainings on how we should be thinking about incorporating these tools into our practices. You can find out more at jonfila.com.

Thank you to Anthony Padrnos and Eric Simmons for coordinating this article submission. Anthony and Eric are the Technology Component Group Representatives on the MASA Board of Directors.
1. Towards Understanding Sycophancy in Language Models, https://arxiv.org/abs/2310.13548
