Measuring Gender Bias in Language Models in Farsi

Hamidreza Saffari, Mohammadamin Shafiei, Donya Rooein, Debora Nozza


Abstract
As Natural Language Processing models become increasingly embedded in everyday life, the ability to measure and mitigate the biases these systems encode is critical. While substantial work has been done to identify and mitigate gender bias in English, Farsi remains largely underexplored. This paper presents the first comprehensive study of gender bias in language models in Farsi across three tasks: emotion analysis, question answering, and hurtful sentence completion. We assess a range of language models on all three tasks in a zero-shot setting. By adapting established evaluation frameworks for Farsi, we uncover patterns of gender bias that differ from those observed in English, highlighting the urgent need for culturally and linguistically inclusive approaches to bias mitigation in NLP.
Anthology ID:
2025.gebnlp-1.21
Volume:
Proceedings of the 6th Workshop on Gender Bias in Natural Language Processing (GeBNLP)
Month:
August
Year:
2025
Address:
Vienna, Austria
Editors:
Agnieszka Faleńska, Christine Basta, Marta Costa-jussà, Karolina Stańczak, Debora Nozza
Venues:
GeBNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
228–241
URL:
https://preview.aclanthology.org/landing_page/2025.gebnlp-1.21/
DOI:
10.18653/v1/2025.gebnlp-1.21
Cite (ACL):
Hamidreza Saffari, Mohammadamin Shafiei, Donya Rooein, and Debora Nozza. 2025. Measuring Gender Bias in Language Models in Farsi. In Proceedings of the 6th Workshop on Gender Bias in Natural Language Processing (GeBNLP), pages 228–241, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Measuring Gender Bias in Language Models in Farsi (Saffari et al., GeBNLP 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.gebnlp-1.21.pdf