Distributionally Robust Finetuning BERT for Covariate Drift in Spoken Language Understanding

Samuel Broscheit, Quynh Do, Judith Gaspers


Abstract
In this study, we investigate robustness against covariate drift in spoken language understanding (SLU). Covariate drift can occur in SLU when there is a drift between training and testing regarding what users request or how they request it. To study this, we propose a method that exploits natural variations in data to create a covariate drift in SLU datasets. Experiments show that a state-of-the-art BERT-based model suffers performance loss under this drift. To mitigate the performance loss, we investigate distributionally robust optimization (DRO) for finetuning BERT-based models. We discuss some recent DRO methods, propose two new variants, and empirically show that DRO improves robustness under drift.
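The abstract names DRO for finetuning but does not spell out an objective. As a rough illustration of the general idea only (a group-DRO-style weighted loss, not the authors' specific variants), the sketch below shows one way such an objective can be computed in PyTorch; the function name, the use of integer group labels, and the exponentiated-gradient weight update are all assumptions made for illustration.

```python
import torch

def group_dro_loss(per_example_losses, group_ids, group_weights, eta=0.01):
    """Sketch of a group-DRO-style objective (illustrative, not the paper's method).

    per_example_losses: (batch,) tensor of unreduced losses from the model
    group_ids:          (batch,) tensor of integer group indices
    group_weights:      (num_groups,) tensor of adversarial group weights (sums to 1)
    eta:                step size for the multiplicative weight update
    """
    num_groups = group_weights.numel()
    group_losses = torch.zeros(num_groups, device=per_example_losses.device)
    for g in range(num_groups):
        mask = group_ids == g
        if mask.any():
            group_losses[g] = per_example_losses[mask].mean()
    # Exponentiated-gradient ascent on the group weights: the "adversary"
    # upweights groups that currently incur higher loss.
    with torch.no_grad():
        group_weights *= torch.exp(eta * group_losses)
        group_weights /= group_weights.sum()
    # The model is then trained to minimize the weighted sum of group losses.
    return group_weights @ group_losses
```

In a finetuning loop, this loss would simply replace the usual mean cross-entropy over the batch, with `group_weights` kept as a persistent tensor across steps.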
Anthology ID:
2022.acl-long.139
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1970–1985
URL:
https://aclanthology.org/2022.acl-long.139
DOI:
10.18653/v1/2022.acl-long.139
Cite (ACL):
Samuel Broscheit, Quynh Do, and Judith Gaspers. 2022. Distributionally Robust Finetuning BERT for Covariate Drift in Spoken Language Understanding. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1970–1985, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Distributionally Robust Finetuning BERT for Covariate Drift in Spoken Language Understanding (Broscheit et al., ACL 2022)
PDF:
https://preview.aclanthology.org/improve-issue-templates/2022.acl-long.139.pdf
Video:
https://preview.aclanthology.org/improve-issue-templates/2022.acl-long.139.mp4