@inproceedings{kalhor-bahrak-2025-probing,
    title = "Probing Gender Bias in Multilingual {LLM}s: A Case Study of Stereotypes in {P}ersian",
    author = "Kalhor, Ghazal  and
      Bahrak, Behnam",
    editor = "Zhang, Chen  and
      Allaway, Emily  and
      Shen, Hua  and
      Miculicich, Lesly  and
      Li, Yinqiao  and
      M'hamdi, Meryem  and
      Limkonchotiwat, Peerat  and
      Bai, Richard He  and
      T.y.s.s., Santosh  and
      Han, Sophia Simeng  and
      Thapa, Surendrabikram  and
      Rim, Wiem Ben",
    booktitle = "Proceedings of the 9th Widening NLP Workshop",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.winlp-main.3/",
    pages = "19--27",
    ISBN = "979-8-89176-351-7",
    abstract = "Multilingual Large Language Models (LLMs) are increasingly used worldwide, making it essential to ensure they are free from gender bias to prevent representational harm. While prior studies have examined such biases in high-resource languages, low-resource languages remain understudied. In this paper, we propose a template-based probing methodology, validated against real-world data, to uncover gender stereotypes in LLMs. As part of this framework, we introduce the Domain-Specific Gender Skew Index (DS-GSI), a metric that quantifies deviations from gender parity. We evaluate four prominent models, GPT-4o mini, DeepSeek R1, Gemini 2.0 Flash, and Qwen QwQ 32B, across four semantic domains, focusing on Persian, a low-resource language with distinct linguistic features. Our results show that all models exhibit gender stereotypes, with greater disparities in Persian than in English across all domains. Among these, sports reflect the most rigid gender biases. This study underscores the need for inclusive NLP practices and provides a framework for assessing bias in other low-resource languages."
}