Don’t Just Clean It, Proxy Clean It: Mitigating Bias by Proxy in Pre-Trained Models

Swetasudha Panda, Ari Kobren, Michael Wick, Qinlan Shen


Abstract
Transformer-based pre-trained models are known to encode societal biases not only in their contextual representations, but also in downstream predictions when fine-tuned on task-specific data. We present D-Bias, an approach that selectively eliminates stereotypical associations (e.g., co-occurrence statistics) at fine-tuning, such that the model doesn’t learn to excessively rely on those signals. D-Bias attenuates biases from both identity words and frequently co-occurring proxies, which we select using pointwise mutual information. We apply D-Bias to a) occupation classification, and b) toxicity classification and find that our approach substantially reduces downstream biases (e.g., by > 60% in toxicity classification, for identities that are most frequently flagged as toxic on online platforms). In addition, we show that D-Bias dramatically improves upon scrubbing, i.e., removing only the identity words in question. We also demonstrate that D-Bias easily extends to multiple identities, and achieves competitive performance with two recently proposed debiasing approaches: R-LACE and INLP.
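As a rough illustration of the proxy-selection idea described in the abstract (not the authors' released implementation), the sketch below ranks words by pointwise mutual information with an identity term over a fine-tuning corpus and then scrubs both the identity word and its top-ranked proxies from the training text. All names here (select_proxies, scrub, TOP_K) are hypothetical.

```python
# Illustrative sketch: PMI-based proxy selection and scrubbing.
# Assumes documents are whitespace-tokenizable strings; real systems
# would use the task tokenizer and more careful counting.
import math
from collections import Counter

TOP_K = 20  # hypothetical number of proxies to scrub per identity term

def select_proxies(docs, identity, top_k=TOP_K):
    """Rank words by PMI with `identity`, using document-level co-occurrence."""
    n_docs = len(docs)
    doc_freq = Counter()   # number of documents containing each word
    co_freq = Counter()    # number of documents containing both the word and the identity term
    for doc in docs:
        words = set(doc.lower().split())
        doc_freq.update(words)
        if identity in words:
            co_freq.update(words - {identity})
    if doc_freq[identity] == 0:
        return []
    p_identity = doc_freq[identity] / n_docs
    scores = {}
    for word, joint in co_freq.items():
        p_word = doc_freq[word] / n_docs
        p_joint = joint / n_docs
        # PMI(identity, word) = log p(identity, word) / (p(identity) p(word))
        scores[word] = math.log(p_joint / (p_identity * p_word))
    return [w for w, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]]

def scrub(doc, identity, proxies):
    """Remove the identity word and its selected proxies before fine-tuning."""
    drop = set(proxies) | {identity}
    return " ".join(w for w in doc.split() if w.lower() not in drop)
```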
Anthology ID:
2022.findings-emnlp.372
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
5073–5085
URL:
https://aclanthology.org/2022.findings-emnlp.372
Cite (ACL):
Swetasudha Panda, Ari Kobren, Michael Wick, and Qinlan Shen. 2022. Don’t Just Clean It, Proxy Clean It: Mitigating Bias by Proxy in Pre-Trained Models. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 5073–5085, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Don’t Just Clean It, Proxy Clean It: Mitigating Bias by Proxy in Pre-Trained Models (Panda et al., Findings 2022)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2022.findings-emnlp.372.pdf