Unmasking Racial & Gender Bias in AI Educational Platforms
As AI tutors like Khanmigo expand to reach 1M+ students in the next year, new research from Stanford & OpenAI shows disturbing patterns in how AI responds to & depicts diverse students. The magnitude of the bias is shocking: for example, one AI model was over 1,300 times more likely to portray Latine male students as "struggling" than as "star" students.
We saw this bias play out in our first set of workshops in Mexico City. When we asked ChatGPT to create images of a typical Mexican classroom and staff room, the images were so stereotypical that they bordered on being offensive.
In the Stanford study, researchers asked GenAI models to create stories about learners and found that the models consistently:
ERASED multiple groups:
- Indigenous/Native American and Native Hawaiian/Pacific Islander students were rarely depicted
- Non-binary & queer students were severely underrepresented
- When minority groups were shown, they were often depicted in limited, stereotypical ways
REINFORCED harmful stereotypes:
- Asian students were portrayed as "model minority" STEM experts, erasing individual differences
- "White savior" narratives were common, with minority students lacking agency
- Latina students (e.g. "Maria") were almost never shown excelling in STEM
These AI biases can deeply impact students:
- Sense of belonging
- Academic confidence
- Identity development
- Educational outcomes
With AI tools rapidly entering classrooms and increasingly being used by young people in and out of school, we need:
- Culturally responsive AI development that takes into account diverse teacher and student perspectives
- Additional research and analysis of these impacts
- Protection for vulnerable students built into student-facing GenAI tools
- Critical examination of AI tools before and after deployment (see the sketch below)
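To make the last point concrete, here is a minimal sketch of what a depiction-disparity audit could look like. This is not the Stanford study's methodology: generate_story() is a hypothetical placeholder you would replace with a call to whichever GenAI tool you are examining, and STUDENT_DESCRIPTORS, STRUGGLING_CUES, and STAR_CUES are illustrative names, not a validated coding scheme. Real audits should use human or validated coders and far larger samples.

```python
"""Minimal sketch of a depiction-disparity audit for a story-generating model."""
from collections import Counter
import random

# Illustrative descriptors and cue words; a real audit needs a validated coding scheme.
STUDENT_DESCRIPTORS = ["a Latino boy", "a white girl", "an Asian boy", "a Native Hawaiian girl"]
STRUGGLING_CUES = ["struggling", "falling behind", "needs help"]
STAR_CUES = ["star student", "excels", "top of the class"]

def generate_story(descriptor: str) -> str:
    """Placeholder: swap in a real call to the model under audit."""
    # Stub output so the sketch runs end to end without any external API.
    return random.choice(STRUGGLING_CUES + STAR_CUES) + f" story about {descriptor}"

def classify(story: str) -> str:
    """Crude keyword-based coding of how the student is portrayed."""
    text = story.lower()
    if any(cue in text for cue in STAR_CUES):
        return "star"
    if any(cue in text for cue in STRUGGLING_CUES):
        return "struggling"
    return "other"

def audit(n_stories: int = 200) -> None:
    """Generate stories per descriptor and report the struggling-to-star ratio."""
    for descriptor in STUDENT_DESCRIPTORS:
        counts = Counter(classify(generate_story(descriptor)) for _ in range(n_stories))
        star = counts["star"] or 1  # avoid division by zero in this sketch
        ratio = counts["struggling"] / star
        print(f"{descriptor}: struggling={counts['struggling']} star={counts['star']} ratio={ratio:.1f}")

if __name__ == "__main__":
    audit()
```

Running the same audit before deployment and again on the live tool is one way to track whether disparities like the "struggling" vs. "star" ratio above change over time.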