Abstract
Previous work has examined how debiasing language models affects downstream tasks: specifically, how debiasing techniques influence task performance, and whether debiased models also make impartial predictions on downstream tasks. However, it remains unclear why debiasing methods have varying impacts on downstream tasks, and how debiasing techniques affect the internal components of language models, i.e., neurons, layers, and attention heads. In this paper, we decompose the internal mechanisms of debiasing language models with respect to gender by applying causal mediation analysis, in order to understand the influence of debiasing methods on toxicity detection as a downstream task. Our findings suggest a need to test the effectiveness of debiasing methods with different bias metrics, and to focus on changes in the behavior of certain components of the models, e.g., the first two layers of language models and attention heads.
- Anthology ID:
- 2022.gebnlp-1.26
- Volume:
- Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP)
- Month:
- July
- Year:
- 2022
- Address:
- Seattle, Washington
- Editors:
- Christian Hardmeier, Christine Basta, Marta R. Costa-jussà, Gabriel Stanovsky, Hila Gonen
- Venue:
- GeBNLP
- Publisher:
- Association for Computational Linguistics
- Pages:
- 255–265
- URL:
- https://aclanthology.org/2022.gebnlp-1.26
- DOI:
- 10.18653/v1/2022.gebnlp-1.26
- Cite (ACL):
- Sullam Jeoung and Jana Diesner. 2022. What changed? Investigating Debiasing Methods using Causal Mediation Analysis. In Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP), pages 255–265, Seattle, Washington. Association for Computational Linguistics.
- Cite (Informal):
- What changed? Investigating Debiasing Methods using Causal Mediation Analysis (Jeoung & Diesner, GeBNLP 2022)
- PDF:
- https://preview.aclanthology.org/nschneid-patch-5/2022.gebnlp-1.26.pdf
- Data
- WikiText-2, WinoBias
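The abstract's key tool, causal mediation analysis, decomposes a treatment's total effect on a model's output into a direct effect and an indirect effect routed through an internal component (the mediator). The following is a minimal numeric sketch of that decomposition, not the paper's code; the toy model, the mediator function, and all coefficients are hypothetical and chosen only to make the arithmetic transparent.

```python
# Illustrative causal mediation analysis on a toy model (hypothetical setup).
# "Treatment" stands in for an intervention such as swapping a gendered word
# in a prompt; the "mediator" stands in for one internal component, e.g. a
# single neuron's activation.

def mediator_value(treatment: int) -> float:
    """Toy mediator (e.g., a neuron activation) that responds to the treatment."""
    return 0.5 * treatment

def model_output(treatment: int, mediator: float) -> float:
    """Toy output that depends on the treatment both directly and via the mediator."""
    return 0.2 * treatment + 0.8 * mediator

# Total effect: change the treatment and let the mediator respond naturally.
y_control = model_output(0, mediator_value(0))
y_treated = model_output(1, mediator_value(1))
total_effect = y_treated - y_control          # 0.6

# Indirect effect: keep the treatment at control, but intervene on the
# mediator, setting it to the value it would take under treatment.
y_mediated = model_output(0, mediator_value(1))
indirect_effect = y_mediated - y_control      # 0.4

# Direct effect: change the treatment but freeze the mediator at control.
y_direct = model_output(1, mediator_value(0))
direct_effect = y_direct - y_control          # 0.2

print(total_effect, indirect_effect, direct_effect)
```

In this linear toy the indirect and direct effects sum to the total effect; in a real language model the same interventions are performed on activations, and a large indirect effect through a component (e.g., an early layer or an attention head) marks it as a locus of the behavior being studied.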