GBEM-UA: Gender Bias Evaluation and Mitigation for Ukrainian Large Language Models
Mykhailo Buleshnyi, Maksym Buleshnyi, Marta Sumyk, Nazarii Drushchak
Abstract
Large Language Models (LLMs) have demonstrated remarkable performance across various domains, but they often inherit biases present in the data they are trained on, leading to unfair or unreliable outcomes—particularly in sensitive areas such as hiring, medical decision-making, and education. This paper evaluates gender bias in LLMs within the Ukrainian language context, where the gendered nature of the language and the use of feminitives introduce additional complexity to bias analysis. We propose a benchmark for measuring bias in Ukrainian and assess several debiasing methods, including prompt debiasing, embedding debiasing, and fine-tuning, to evaluate their effectiveness. Our results suggest that embedding debiasing alone is insufficient for a morphologically rich language like Ukrainian, whereas fine-tuning proves more effective in mitigating bias for domain-specific tasks.
- Anthology ID:
- 2025.unlp-1.8
- Volume:
- Proceedings of the Fourth Ukrainian Natural Language Processing Workshop (UNLP 2025)
- Month:
- July
- Year:
- 2025
- Address:
- Vienna, Austria (online)
- Editor:
- Mariana Romanyshyn
- Venues:
- UNLP | WS
- Association for Computational Linguistics
- Pages:
- 64–72
- URL:
- https://preview.aclanthology.org/acl25-workshop-ingestion/2025.unlp-1.8/
- Cite (ACL):
- Mykhailo Buleshnyi, Maksym Buleshnyi, Marta Sumyk, and Nazarii Drushchak. 2025. GBEM-UA: Gender Bias Evaluation and Mitigation for Ukrainian Large Language Models. In Proceedings of the Fourth Ukrainian Natural Language Processing Workshop (UNLP 2025), pages 64–72, Vienna, Austria (online). Association for Computational Linguistics.
- Cite (Informal):
- GBEM-UA: Gender Bias Evaluation and Mitigation for Ukrainian Large Language Models (Buleshnyi et al., UNLP 2025)
- PDF:
- https://preview.aclanthology.org/acl25-workshop-ingestion/2025.unlp-1.8.pdf