Learning to Perturb Word Embeddings for Out-of-distribution QA

Seanie Lee, Minki Kang, Juho Lee, Sung Ju Hwang


Abstract
QA models based on pretrained language models have achieved remarkable performance on various benchmark datasets. However, QA models do not generalize well to unseen data that falls outside the training distribution, due to distributional shift. Data augmentation (DA) techniques which drop or replace words have been shown to be effective in regularizing the model and preventing it from overfitting to the training data. Yet, they may adversely affect QA tasks, since they incur semantic changes that may lead to wrong answers. To tackle this problem, we propose a simple yet effective DA method based on a stochastic noise generator, which learns to perturb the word embeddings of the input questions and context without changing their semantics. We validate the performance of QA models trained with our word embedding perturbation on a single source dataset, evaluating on five different target domains. The results show that our method significantly outperforms the baseline DA methods. Notably, the model trained with ours outperforms the model trained with more than 240K artificially generated QA pairs.
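The abstract describes the perturbation only at a high level, so the following is a minimal PyTorch sketch of the general idea rather than the paper's exact formulation: a small network maps each word embedding to the parameters of a Gaussian, a multiplicative perturbation is sampled with the reparameterization trick, and a KL term toward an identity-centered prior discourages perturbations large enough to change the input's meaning. The class name NoiseGenerator, the multiplicative form, and the beta weight are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NoiseGenerator(nn.Module):
    # Hypothetical sketch: maps each word embedding to the mean and
    # log-variance of a Gaussian and samples a multiplicative
    # perturbation via the reparameterization trick.
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.mu = nn.Linear(hidden_dim, hidden_dim)
        self.logvar = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, emb: torch.Tensor):
        mu, logvar = self.mu(emb), self.logvar(emb)
        std = torch.exp(0.5 * logvar)
        noise = mu + std * torch.randn_like(std)  # reparameterization trick
        # KL divergence to an N(1, I) prior keeps the multiplicative
        # noise close to the identity, so the perturbed input should
        # stay semantically close to the original (an assumption of
        # this sketch, not the paper's stated objective).
        kl = 0.5 * ((mu - 1.0).pow(2) + logvar.exp() - logvar - 1.0).mean()
        return emb * noise, kl  # perturbed embeddings, regularizer

# Illustrative usage: perturb embeddings before the QA encoder and add
# the KL term (weighted by a hypothetical coefficient beta) to the loss.
gen = NoiseGenerator(hidden_dim=768)
emb = torch.randn(2, 16, 768)  # (batch, seq_len, embed_dim)
perturbed, kl = gen(emb)
# loss = qa_loss(perturbed) + beta * kl
```

A natural design choice here is multiplicative rather than additive noise, since it scales each embedding dimension relative to its magnitude; the released code at seanie12/SWEP is the authoritative reference for the actual method.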
Anthology ID:
2021.acl-long.434
Volume:
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Month:
August
Year:
2021
Address:
Online
Editors:
Chengqing Zong, Fei Xia, Wenjie Li, Roberto Navigli
Venues:
ACL | IJCNLP
Publisher:
Association for Computational Linguistics
Pages:
5583–5595
URL:
https://aclanthology.org/2021.acl-long.434
DOI:
10.18653/v1/2021.acl-long.434
Cite (ACL):
Seanie Lee, Minki Kang, Juho Lee, and Sung Ju Hwang. 2021. Learning to Perturb Word Embeddings for Out-of-distribution QA. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5583–5595, Online. Association for Computational Linguistics.
Cite (Informal):
Learning to Perturb Word Embeddings for Out-of-distribution QA (Lee et al., ACL-IJCNLP 2021)
PDF:
https://preview.aclanthology.org/ingest-2024-clasp/2021.acl-long.434.pdf
Video:
https://preview.aclanthology.org/ingest-2024-clasp/2021.acl-long.434.mp4
Code:
seanie12/SWEP
Data:
BioASQ | SQuAD