Robust Natural Language Understanding with Residual Attention Debiasing
Fei Wang, James Y. Huang, Tianyi Yan, Wenxuan Zhou, Muhao Chen
Abstract
Natural language understanding (NLU) models often suffer from unintended dataset biases. Among bias mitigation methods, ensemble-based debiasing methods, especially product-of-experts (PoE), have stood out for their impressive empirical success. However, previous ensemble-based debiasing methods typically apply debiasing on top-level logits without directly addressing biased attention patterns. Attention serves as the main medium of feature interaction and aggregation in PLMs and plays a crucial role in providing robust predictions. In this paper, we propose REsidual Attention Debiasing (READ), an end-to-end debiasing method that mitigates unintended biases from attention. Experiments on three NLU benchmarks show that READ significantly improves the OOD performance of BERT-based models, including +12.9% accuracy on HANS, +11.0% accuracy on FEVER-Symmetric, and +2.7% F1 on PAWS. Detailed analyses demonstrate the crucial role of unbiased attention in robust NLU models and show that READ effectively mitigates biases in attention.
- Anthology ID:
- 2023.findings-acl.32
- Volume:
- Findings of the Association for Computational Linguistics: ACL 2023
- Month:
- July
- Year:
- 2023
- Address:
- Toronto, Canada
- Editors:
- Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 504–519
- URL:
- https://preview.aclanthology.org/add_missing_videos/2023.findings-acl.32/
- DOI:
- 10.18653/v1/2023.findings-acl.32
- Cite (ACL):
- Fei Wang, James Y. Huang, Tianyi Yan, Wenxuan Zhou, and Muhao Chen. 2023. Robust Natural Language Understanding with Residual Attention Debiasing. In Findings of the Association for Computational Linguistics: ACL 2023, pages 504–519, Toronto, Canada. Association for Computational Linguistics.
- Cite (Informal):
- Robust Natural Language Understanding with Residual Attention Debiasing (Wang et al., Findings 2023)
- PDF:
- https://preview.aclanthology.org/add_missing_videos/2023.findings-acl.32.pdf
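The product-of-experts (PoE) ensembling that the abstract identifies as the dominant debiasing approach can be sketched as follows. This is a generic, self-contained illustration with made-up logits, not the paper's READ implementation (READ instead debiases attention rather than only the top-level logits): a bias-only model's log-probabilities are added to the main model's during training, so the main model is pushed to explain what the biased features cannot; at test time only the main model is used.

```python
import math

def log_softmax(logits):
    """Numerically stable log-softmax over a list of logits."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - log_z for x in logits]

def poe_logits(main_logits, bias_logits):
    """Product-of-experts ensemble: sum the log-probabilities of the
    main model and the bias-only model. Training through this sum
    discourages the main model from relying on biased features."""
    return [m + b for m, b in zip(log_softmax(main_logits),
                                  log_softmax(bias_logits))]

# Toy 3-class example (e.g. entailment / neutral / contradiction);
# the logit values here are invented for illustration.
main = [2.0, 0.5, 0.1]
bias = [3.0, 0.0, 0.0]   # bias-only model is overconfident on class 0
ensemble = poe_logits(main, bias)
probs = [math.exp(x) for x in log_softmax(ensemble)]
```

A training loss (e.g. cross-entropy) would be computed on `ensemble` rather than on `main`, while the bias-only model's parameters are typically frozen or trained separately.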