Can AI Foster Equity in Education?

Across the UMD College of Education, Students and Faculty Are Exploring How to Harness AI to Create More Equitable Opportunities for All Learning Levels. The Key? It’s in the Way that You Use It.

Artificial intelligence (AI) has been called many things, but “equalizing force” probably doesn’t leap to mind for most people. Then again, most people don’t have the same life experiences as University of Maryland graduate student Muhammad Fusenig, who temporarily lost his ability to read, write and understand language after suffering hemiplegic migraines in his early 20s. 

“There were many times when I could not adequately express myself or find the words that I knew I had the capacity for,” said Fusenig, who is pursuing a Ph.D. in educational psychology. “It was an incredibly limiting experience.” 

As an undergrad who studied political science at the University of California, Davis, Fusenig initially aspired to a career that would draw on the very skills he was struggling with: speaking, writing and reading. But as he began having frightening stroke-like symptoms shortly after undergoing surgery for a spinal injury, his focus shifted to understanding the mechanisms behind language and how AI-based language-processing tools might help him and others experiencing similar difficulties. He developed an assistive writing program to aid him in finding the right words when his brain wouldn’t deliver them. 

His interest in AI brought him across disciplines—and across the country—to study the practical and theoretical aspects of artificial intelligence at the ˻ֱ College of Education. “There’s a lot of moralizing that goes on with [AI],” said Fusenig, who is evaluating how college students are using his AI-based software and what their motivations are. “But we don't really know why somebody’s using it in school or out of school,” he added. “To me, AI seems like a very equalizing force.”

Fusenig is part of a small research team at UMD headed by Patricia A. Alexander, Distinguished University Professor and a world expert on text-based learning, knowledge development and reasoning. His story underscores that AI, like so many transformative technologies, cannot be reduced to a single word like “good” or “bad.”

In other words, the common fear that AI can be used for cheating and as a shortcut around learning is well founded. Yet equally true is that AI can help level the playing field for historically excluded students, and empower teachers in marginalized communities with effective new tools they couldn’t otherwise access. 

“Context is everything,” explained Alexander. “Any form of AI has its pros and its cons, and one of the things we have to be sensitive to and aware of is, how do we prepare students to use those devices in a way that proves them to be extremely facilitative?” 

Alexander and Fusenig are among the many experts at the UMD College of Education looking to harness the power of AI to foster equity at all levels of education. While some faculty and students are focused on providing fairer learning experiences for historically excluded students in STEM (science, technology, engineering and math), others are exploring how to expand access to training among K-12 teachers. What they all have in common is a nuanced understanding that, given the explosive growth of AI, it’s incumbent on educators to lead the way to its responsible integration in learning spaces everywhere.

Generative AI to increase students’ confidence and belonging

For David Weintrop, associate professor with a joint appointment in the College of Education’s Department of Teaching and Learning, Policy and Leadership, and the College of Information, studying AI in the context of computer science was a logical choice. “People are often surprised at how incredibly powerful generative AI is when it comes to writing code,” he said. “You can give tools like ChatGPT very high-level, abstract prompts, and they can produce functioning code that can do lots of things that otherwise would take a very long time.” 
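
To make that concrete, here is a minimal sketch of what Weintrop describes: handing a high-level, abstract prompt to a large language model and getting working code back. It assumes the OpenAI Python SDK with an API key set in the environment; the model name and the prompt are illustrative placeholders, not drawn from the project.

```python
# Minimal sketch: generating code from a high-level prompt.
# Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable; the model and prompt
# below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# An abstract prompt -- no algorithmic detail is supplied.
prompt = (
    "Write a Python function that takes a list of (name, quiz_score) "
    "pairs and returns the names of students scoring below the class "
    "average."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model would do
    messages=[{"role": "user", "content": prompt}],
)

# The reply typically contains complete, working code that a novice
# would otherwise spend considerable time writing by hand.
print(response.choices[0].message.content)
```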

Weintrop is co-leading a project with Assistant Professor Joel Chan in UMD’s College of Information to evaluate whether the widespread availability of large language models like ChatGPT is helping or hindering college students from historically excluded backgrounds (those who are Black, Indigenous or people of color; women or nonbinary; English language learners; or first-generation college students) in learning introductory programming. The project is funded by a $60,000 grant from Google and a $50,000 grant from UMD’s Teaching and Learning Transformation Center.

Weintrop and the team want to understand how these learners perceive generative AI. On one hand, they theorize that the students may find these tools build their confidence by augmenting learning. On the other, the researchers also think it’s possible the learners could find generative AI alienating. For example, if ChatGPT can enable good grades without learning, historically excluded students might experience an amplified sense of imposter syndrome. They may feel like, “I'm just faking it and using these other tools, and I still don't know what's going on,” Weintrop said. 

In Spring 2024, the team collected baseline data from students in introductory computing classes. In the fall, they’re rolling out the same course, but this time integrating generative AI tools into the design of the class. Ultimately, they’ll measure the impact of the large language models on the students’ interest, confidence, self-efficacy and sense of belonging. 

Although the study won’t conclude until December, Weintrop has already made some intriguing observations, noting that most students don’t want to use AI as a shortcut to learning. “They wanted to develop a deep conceptual understanding and fundamental proficiency in learning to program,” he said.

But that doesn’t mean they can’t benefit from generative AI at all. “Students are going to be graduating into a world where those tools are accessible,” Weintrop said. “So it makes sense to me that they learn how to use them in responsible and rigorous ways as opposed to just pretending they don't exist.”

Janet Shufor Ph.D. ’24, who worked in Weintrop’s group and recently earned her doctorate in teaching and learning, policy and leadership with a specialization in technology, learning and leadership, explored similar questions about the impact of generative AI on historically excluded K-12 students learning to code. She found that even younger learners were largely able to self-moderate their use of AI. “They don’t rely on the tools to write the code because they’re aware they have to learn to code themselves,” she noted.

Natural language processing to guide educators

Natural language processing (NLP) is a component of AI that allows a computer program to understand human language as it’s spoken or written. In a project funded by a grant from UMD, Assistant Professor Jing Liu and his team are using NLP to analyze transcripts from K-12 mathematics classrooms and provide specific, actionable feedback to help teachers improve. For example, they’re exploring how often the educators interact with students or take up their ideas. 
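
The team’s models are far more sophisticated than this, but a toy sketch conveys the idea: with a transcript whose turns are labeled by speaker, even simple heuristics can surface how much a teacher talks and how often they build on a student’s words. The invented transcript and the word-overlap proxy for “uptake” below are illustrative assumptions, not the team’s actual method.

```python
# Toy illustration of transcript-based feedback. Real systems use trained
# NLP models; this word-overlap heuristic is only a stand-in for "uptake"
# (a teacher building on a student's idea).

transcript = [  # (speaker, utterance) pairs -- invented example data
    ("student", "I think the fractions need the same denominator."),
    ("teacher", "Good, say more about why the denominator matters."),
    ("student", "Because you can't add pieces of different sizes."),
    ("teacher", "Let's move on to the next problem."),
]

def content_words(utterance):
    """Lowercase words longer than three letters, a crude content filter."""
    return {w.strip(".,?!").lower() for w in utterance.split() if len(w) > 3}

teacher_turns = uptake_turns = 0
prev_student_words = set()
for speaker, utterance in transcript:
    if speaker == "teacher":
        teacher_turns += 1
        # Count the turn as "uptake" if it reuses a student's content words.
        if content_words(utterance) & prev_student_words:
            uptake_turns += 1
    else:
        prev_student_words = content_words(utterance)

print(f"Teacher turns: {teacher_turns}, with uptake: {uptake_turns}")
```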

“I generate the feedback as a way [to guide] teachers’ professional learning, so that, for schools and classrooms that don't have a lot of resources, and teachers who don't have a lot of support in terms of their instructional practices, they can at least rely on AI tools,” Liu said. “I think that's just another way to think about equity.” 

Liu was a member of the faculty group that helped launch the campuswide Artificial Intelligence Interdisciplinary Institute at Maryland (AIM) in April, which supports faculty research, learning opportunities and advances in ethical AI.

One of his team’s initial observations is that NLP alone appears to be less helpful to educators than NLP combined with human input. Liu noted that teachers who were given AI-based analyses alone generally found it challenging to interpret the feedback and weren’t particularly motivated to do so. However, when combined with human coaching, NLP seemed to be a very helpful training tool. 

In fact, Liu recently received a grant from the Overdeck Family Foundation to conduct a study that will assess the impact of combining human coaching with automated feedback. 

He is also leading an effort jointly supported with $4.5 million from the Bill & Melinda Gates Foundation, the Walton Family Foundation and the Chan Zuckerberg Initiative to create high-quality benchmark data on math teaching from diverse upper elementary and middle schools—information that is currently lacking, he said. “From a technical perspective, when you develop AI models, the initial data you use to train the models has to be pretty representative, so that your downstream model and applications can be more unbiased,” he added. To address that, Liu is collecting a wealth of information from demographic surveys and other sources about students’ test scores, sense of belonging, perceptions of mathematics and more. The data will be used to train subsequent AI models using the highest-quality—and most equitable—information possible. “So rather than just … developing more AI models, let’s take a step back and create the best data first,” he explained.  
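
As a rough illustration of what “representative” means here, a common first check is to compare a training sample’s demographic composition against the population it is meant to mirror. The sketch below uses pandas with invented group labels and figures; none of it comes from Liu’s dataset.

```python
import pandas as pd

# Invented example: does a training sample mirror the district it came from?
population_share = pd.Series(
    {"Black": 0.31, "Hispanic": 0.28, "White": 0.25, "Asian": 0.16}
)

sample = pd.DataFrame({
    "student_group": ["White"] * 60 + ["Black"] * 20
                     + ["Hispanic"] * 12 + ["Asian"] * 8
})
sample_share = sample["student_group"].value_counts(normalize=True)

# Large gaps flag groups the sample under- or over-represents, a bias
# that any model trained on the sample can inherit downstream.
gap = (sample_share - population_share).sort_values()
print(gap.round(2))
```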

AI-based “virtual students” to enhance teacher training

When Fengfeng Ke joins the UMD College of Education as a new Clark Leadership Chair in January 2025, she says that one of the questions she wants to explore is: “How can we train the teachers … to help the classroom to become more equitable or more inclusive and more personalized in general?” Ke will bring to UMD a deep background in game-based learning, immersive learning, computer-supported collaborative learning and the inclusive design of e-learning. 

Her research looks at using AI-powered “virtual students” as part of educators’ preservice training protocols, with the goal of creating an accessible, scalable teaching simulation that can augment in-person practicums, she said. A National Science Foundation award of approximately $600,000 supports this work.

The virtual students draw on large language models to help educators practice interacting with a diverse group of students. The technology offers a more sophisticated training modality than traditional role-playing exercises that use fixed decision trees, which are flowchart-like tools that map the outcomes of particular courses of action. 
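
The contrast is easiest to see side by side. In the hypothetical sketch below, a fixed decision tree can only return replies an author scripted in advance, while an LLM-backed virtual student (the `ask_llm` helper is an invented stand-in for any chat-model call) can respond in character to whatever the trainee says.

```python
# A fixed decision tree: every branch and reply is authored in advance.
decision_tree = {
    "prompt": "The virtual student says: 'I don't get this problem.'",
    "choices": {
        "re-explain the procedure": "Student nods but stays quiet.",
        "ask what part is confusing": "Student says the first step is unclear.",
    },
}

def tree_student(teacher_move):
    # Unscripted moves dead-end: the tree has no response for them.
    return decision_tree["choices"].get(teacher_move, "(no scripted response)")

# An LLM-backed virtual student: ask_llm is a hypothetical stand-in for
# any chat-model call; the persona prompt shapes open-ended replies.
def llm_student(teacher_move, ask_llm):
    persona = ("You are a sixth grader who finds fractions confusing. "
               "Reply in character to your teacher.")
    return ask_llm(persona, teacher_move)  # responds to ANY teacher move

print(tree_student("ask what part is confusing"))  # scripted reply
print(tree_student("offer a visual model"))        # dead end: unscripted
```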

Ke is particularly interested in helping teachers work more effectively with neurodiverse populations, including students with autism, although she does not yet have access to the dataset needed to evaluate an AI-based intervention. Like Liu, she noted that high-quality data from diverse populations are needed (in her case, data representative of neurodiverse individuals) before further AI models can be developed to benefit that group. “It’s a critical bottleneck right now,” she said. But eventually, she hopes to explore whether simulations enhanced by generative AI can be empowering tools for both neurodiverse students and the teachers, caregivers and healthcare providers who interact with them.

The role of educators and students

As a renowned thought leader on AI in education, Alexander regularly consults with faculty in engineering, medicine, writing and other disciplines, in addition to teaching students. She emphasizes that everyone has a role to play when it comes to ensuring the technology is used responsibly. That begins with understanding that AI should never be used as a substitute for one’s own thinking—which defeats the purpose of learning. For that reason, Alexander asks her students to first master tasks and concepts without using technology.

“But once they have acquired some fundamental skills, augmenting those with AI is very helpful to them,” she added. “We all look up words. We all do things like that all the time, even if we consider ourselves to be proficient.”

In addition, educators have a responsibility to talk about the AI elephant in the classroom by offering students thoughtful guidance on how to use the technology effectively while acknowledging its shortcomings. That includes reinforcing that AI is only as inclusive as the data it draws upon, and that data frequently falls short. 

Yet, as the critical work being done by UMD College of Education faculty and students highlights, AI also has tremendous potential to create fairer, more individualized educational experiences. You might even call it an equalizing force.

Illustration by Jeannie Phan