Social Bias in Multilingual Language Models: A Survey

Lance Calvin Lim Gamboa, Yue Feng, Mark G. Lee

Abstract
Pretrained multilingual models exhibit the same social biases as models processing English texts. This systematic review analyzes emerging research that extends bias evaluation and mitigation approaches into multilingual and non-English contexts. We examine these studies with respect to linguistic diversity, cultural awareness, and their choice of evaluation metrics and mitigation techniques. Our survey illuminates gaps in the field’s dominant methodological design choices (e.g., preference for certain languages, scarcity of multilingual mitigation experiments) while cataloging common issues encountered and solutions implemented in adapting bias benchmarks across languages and cultures. Drawing from the implications of our findings, we chart directions for future research that can reinforce the multilingual bias literature’s inclusivity, cross-cultural appropriateness, and alignment with state-of-the-art NLP advancements.
Anthology ID:
2025.emnlp-main.1416
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
27845–27868
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1416/
Cite (ACL):
Lance Calvin Lim Gamboa, Yue Feng, and Mark G. Lee. 2025. Social Bias in Multilingual Language Models: A Survey. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 27845–27868, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Social Bias in Multilingual Language Models: A Survey (Gamboa et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1416.pdf
Checklist:
 2025.emnlp-main.1416.checklist.pdf