Abstract
The influence of Large Language Models (LLMs) is growing rapidly as they automate more jobs over time. Given this expanding impact, assessing the fairness of LLMs is crucial. Studies reveal that LLMs reflect societal norms and biases, creating a risk of propagating societal stereotypes in downstream tasks. Most studies of bias in LLMs focus on gender bias in various NLP applications. However, there is a gap in research on bias in emotional attributes, despite the close societal link between emotion and gender. This gap is even larger for low-resource languages like Bangla. Historically, women have been associated with emotions like empathy, fear, and guilt, while men have been linked to anger, bravado, and authority; this pattern reflects societal norms in Bangla-speaking regions. In this work, we offer the first thorough investigation of gendered emotion attribution in Bangla for both closed- and open-source LLMs. Our aim is to elucidate the intricate societal relationship between gender and emotion specifically within the context of Bangla. Through analytical methods, we show that gender bias exists in emotion attribution in Bangla, and we demonstrate how emotion attribution changes based on gendered role selection in LLMs. All of our resources, including code and data, are publicly available to support future research on Bangla NLP. Warning: This paper contains explicit stereotypical statements that many may find offensive.
- Anthology ID: 2024.gebnlp-1.25
- Volume: Proceedings of the 5th Workshop on Gender Bias in Natural Language Processing (GeBNLP)
- Month: August
- Year: 2024
- Address: Bangkok, Thailand
- Editors: Agnieszka Faleńska, Christine Basta, Marta Costa-jussà, Seraphina Goldfarb-Tarrant, Debora Nozza
- Venues: GeBNLP | WS
- Publisher: Association for Computational Linguistics
- Pages: 384–398
- URL: https://aclanthology.org/2024.gebnlp-1.25
- Cite (ACL): Jayanta Sadhu, Maneesha Saha, and Rifat Shahriyar. 2024. An Empirical Study of Gendered Stereotypes in Emotional Attributes for Bangla in Multilingual Large Language Models. In Proceedings of the 5th Workshop on Gender Bias in Natural Language Processing (GeBNLP), pages 384–398, Bangkok, Thailand. Association for Computational Linguistics.
- Cite (Informal): An Empirical Study of Gendered Stereotypes in Emotional Attributes for Bangla in Multilingual Large Language Models (Sadhu et al., GeBNLP-WS 2024)
- PDF: https://preview.aclanthology.org/nschneid-patch-4/2024.gebnlp-1.25.pdf