Evaluating Gender Bias in Dutch NLP: Insights from RobBERT-2023 and the HONEST Framework

Marie Dewulf


Abstract
This study investigates gender bias in the Dutch RobBERT-2023 language model using an adapted version of the HONEST framework, which assesses harmful sentence completions. By translating and expanding HONEST templates to include non-binary and gender-neutral language, we systematically evaluate whether RobBERT-2023 exhibits biased or harmful outputs across gender identities. Our findings reveal that while the model’s overall bias score is relatively low, non-binary identities are disproportionately affected by derogatory language.
Anthology ID:
2025.gitt-1.7
Volume:
Proceedings of the 3rd Workshop on Gender-Inclusive Translation Technologies (GITT 2025)
Month:
June
Year:
2025
Address:
Geneva, Switzerland
Editors:
Janiça Hackenbuchner, Luisa Bentivogli, Joke Daems, Chiara Manna, Beatrice Savoldi, Eva Vanmassenhove
Venue:
GITT
SIG:
Publisher:
European Association for Machine Translation
Note:
Pages:
91–92
Language:
URL:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.gitt-1.7/
DOI:
Bibkey:
Cite (ACL):
Marie Dewulf. 2025. Evaluating Gender Bias in Dutch NLP: Insights from RobBERT-2023 and the HONEST Framework. In Proceedings of the 3rd Workshop on Gender-Inclusive Translation Technologies (GITT 2025), pages 91–92, Geneva, Switzerland. European Association for Machine Translation.
Cite (Informal):
Evaluating Gender Bias in Dutch NLP: Insights from RobBERT-2023 and the HONEST Framework (Dewulf, GITT 2025)
Copy Citation:
PDF:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.gitt-1.7.pdf