Sathvika Anand
2025
LLMs are Biased Teachers: Evaluating LLM Bias in Personalized Education
Iain Weissburg | Sathvika Anand | Sharon Levy | Haewon Jeong
Findings of the Association for Computational Linguistics: NAACL 2025
With the increasing adoption of large language models (LLMs) in education, concerns about inherent biases in these models have gained prominence. We evaluate LLMs for bias in the personalized educational setting, specifically focusing on the models’ roles as “teachers.” We reveal significant biases in how models generate and select educational content tailored to different demographic groups, including race, ethnicity, sex, gender, disability status, income, and national origin. We introduce and apply two bias score metrics—Mean Absolute Bias (MAB) and Maximum Difference Bias (MDB)—to analyze 9 open and closed state-of-the-art LLMs. Our experiments, which utilize over 17,000 educational explanations across multiple difficulty levels and topics, uncover that models potentially harm student learning by both perpetuating harmful stereotypes and reversing them. We find that bias is similar for all frontier models, with the highest MAB along income levels while MDB is highest relative to both income and disability status. For both metrics, we find the lowest bias exists for sex/gender and race/ethnicity.
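The abstract names the two bias scores but does not reproduce their formulas. As a minimal sketch, assuming MAB averages each group's absolute deviation from the cross-group mean and MDB takes the largest gap between any two groups, the metrics might be computed as follows; the group names, scores, and scoring procedure here are hypothetical, not taken from the paper:

```python
import numpy as np

def bias_scores(group_scores):
    """Illustrative group-level bias scores.

    group_scores: dict mapping demographic group -> mean quality score of
    model-generated explanations for that group (hypothetical input; the
    paper's exact scoring procedure is not reproduced here).
    """
    scores = np.array(list(group_scores.values()))
    baseline = scores.mean()
    # Mean Absolute Bias (assumed definition): average absolute deviation
    # of each group's score from the cross-group mean.
    mab = np.abs(scores - baseline).mean()
    # Maximum Difference Bias (assumed definition): largest gap between
    # any two groups along this demographic axis.
    mdb = scores.max() - scores.min()
    return mab, mdb

# Example with made-up per-group scores along one demographic axis.
mab, mdb = bias_scores({"group_a": 0.62, "group_b": 0.71, "group_c": 0.55})
print(f"MAB={mab:.3f}, MDB={mdb:.3f}")
```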
2024
Cheap Talk: Topic Analysis of CSR Themes on Corporate Twitter
Nile Phillips | Sathvika Anand | Michelle Lum | Manisha Goel | Michelle Zemel | Alexandra Schofield
Proceedings of the Joint Workshop of the 7th Financial Technology and Natural Language Processing, the 5th Knowledge Discovery from Unstructured Data in Financial Services, and the 4th Workshop on Economics and Natural Language Processing
Numerous firms advertise action around corporate social responsibility (CSR) on social media. Using a Twitter corpus from S&P 500 companies and topic modeling, we investigate how companies talk about their social and sustainability efforts and whether CSR-related speech predicts Environmental, Social, and Governance (ESG) risk scores. As part of our work in progress, we present early findings suggesting a possible distinction in language between authentic discussion of positive practices and corporate posturing.
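The abstract does not specify which topic model the study uses. As a minimal sketch of the general approach, a bag-of-words LDA over a toy corpus of CSR-style tweets might look like the following; the corpus, topic count, and preprocessing are illustrative assumptions, not details from the paper:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical mini-corpus of CSR-style tweets; the actual study draws on
# tweets from S&P 500 companies.
tweets = [
    "Proud to announce our new carbon neutrality pledge for 2030",
    "Volunteering day: our employees gave back to local communities",
    "Committed to sustainable sourcing across our supply chain",
    "Celebrating diversity and inclusion across all our offices",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(tweets)

# Fit a small LDA model; the paper does not report its topic count, so
# n_components=2 is purely illustrative.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Show the top words per topic, a common way to inspect CSR themes.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"Topic {k}: {', '.join(top)}")
```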