Marie Dewulf


2025

pdf bib
Evaluating Gender Bias in Dutch NLP: Insights from RobBERT-2023 and the HONEST Framework
Marie Dewulf
Proceedings of the 3rd Workshop on Gender-Inclusive Translation Technologies (GITT 2025)

This study investigates gender bias in the Dutch RobBERT-2023 language model using an adapted version of the HONEST framework, which assesses harmful sentence completions. By translating and expanding HONEST templates to include non-binary and gender-neutral language, we systematically evaluate whether RobBERT-2023 exhibits biased or harmful outputs across gender identities. Our findings reveal that while the model’s overall bias score is relatively low, non-binary identities are disproportionately affected by derogatory language.
Search
Co-authors
    Venues
    Fix author