Generating Data to Mitigate Spurious Correlations in Natural Language Inference Datasets

Yuxiang Wu, Matt Gardner, Pontus Stenetorp, Pradeep Dasigi


Abstract
Natural language processing models often exploit spurious correlations between task-independent features and labels in datasets to perform well only within the distributions they are trained on, while not generalising to different task distributions. We propose to tackle this problem by generating a debiased version of a dataset, which can then be used to train a debiased, off-the-shelf model, by simply replacing its training data. Our approach consists of 1) a method for training data generators to generate high-quality, label-consistent data samples; and 2) a filtering mechanism for removing data points that contribute to spurious correlations, measured in terms of z-statistics. We generate debiased versions of the SNLI and MNLI datasets, and we evaluate on a large suite of debiased, out-of-distribution, and adversarial test sets. Results show that models trained on our debiased datasets generalise better than those trained on the original datasets in all settings. On the majority of the datasets, our method outperforms or performs comparably to previous state-of-the-art debiasing strategies, and when combined with an orthogonal technique, product-of-experts, it improves further and outperforms the previous best results on SNLI-hard and MNLI-hard.
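The abstract's filtering step can be illustrated with a minimal sketch: for each surface feature (e.g. a word), compare its empirical label distribution against the overall label prior with a z-statistic, and drop examples containing features whose skew is statistically significant. This is a simplified illustration, not the paper's exact implementation; the feature extractor, threshold, and `filter_spurious` helper below are assumptions for the sake of the example.

```python
import math
from collections import defaultdict

def z_statistic(count_with_label, count_total, prior):
    """Z-statistic for whether a feature's label distribution
    deviates from the overall label prior."""
    if count_total == 0:
        return 0.0
    p_hat = count_with_label / count_total
    return (p_hat - prior) / math.sqrt(prior * (1 - prior) / count_total)

def filter_spurious(examples, labels, get_features, z_threshold=3.0):
    """Drop examples containing any feature whose association with a
    label exceeds the z-threshold. `examples` and `labels` are
    parallel lists; `get_features` maps an example to its features."""
    n = len(examples)
    label_set = sorted(set(labels))
    # Count feature occurrences overall and per label.
    feat_total = defaultdict(int)
    feat_label = defaultdict(int)
    for x, y in zip(examples, labels):
        for f in set(get_features(x)):
            feat_total[f] += 1
            feat_label[(f, y)] += 1
    prior = {y: labels.count(y) / n for y in label_set}
    # Mark features with a statistically significant label skew.
    biased = set()
    for f, total in feat_total.items():
        for y in label_set:
            z = z_statistic(feat_label[(f, y)], total, prior[y])
            if abs(z) > z_threshold:
                biased.add(f)
    # Keep only examples that contain no biased feature.
    return [(x, y) for x, y in zip(examples, labels)
            if not any(f in biased for f in set(get_features(x)))]
```

For instance, if the word "not" appears almost exclusively in contradiction examples, its z-statistic against the contradiction prior is large, and examples containing it are filtered out.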
Anthology ID:
2022.acl-long.190
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
2660–2676
URL:
https://aclanthology.org/2022.acl-long.190
DOI:
10.18653/v1/2022.acl-long.190
Cite (ACL):
Yuxiang Wu, Matt Gardner, Pontus Stenetorp, and Pradeep Dasigi. 2022. Generating Data to Mitigate Spurious Correlations in Natural Language Inference Datasets. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2660–2676, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Generating Data to Mitigate Spurious Correlations in Natural Language Inference Datasets (Wu et al., ACL 2022)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2022.acl-long.190.pdf
Video:
 https://preview.aclanthology.org/ingestion-script-update/2022.acl-long.190.mp4
Code
 jimmycode/gen-debiased-nli
Data
GD-NLI, HANS, MultiNLI, SNLI