Marie Dewulf




2025

Evaluating Gender Bias in Dutch NLP: Insights from RobBERT-2023 and the HONEST Framework
Marie Dewulf
Proceedings of the 3rd Workshop on Gender-Inclusive Translation Technologies (GITT 2025)

This study investigates gender bias in the Dutch RobBERT-2023 language model using an adapted version of the HONEST framework, which assesses harmful sentence completions. By translating and expanding HONEST templates to include non-binary and gender-neutral language, we systematically evaluate whether RobBERT-2023 exhibits biased or harmful outputs across gender identities. Our findings reveal that while the model’s overall bias score is relatively low, non-binary identities are disproportionately affected by derogatory language.