An Empirical Study on Model-agnostic Debiasing Strategies for Robust Natural Language Inference

Tianyu Liu, Zheng Xin, Xiaoan Ding, Baobao Chang, Zhifang Sui


Abstract
Prior work on natural language inference (NLI) debiasing mainly targets one or a few known biases while not necessarily making the models more robust. In this paper, we focus on model-agnostic debiasing strategies and explore how (or whether it is possible) to make NLI models robust to multiple distinct adversarial attacks while keeping or even strengthening the models’ generalization power. We first benchmark prevailing neural NLI models, including pretrained ones, on various adversarial datasets. We then try to combat distinct known biases by modifying a mixture-of-experts (MoE) ensemble method, and show that it is nontrivial to mitigate multiple NLI biases at the same time and that the model-level ensemble method outperforms the MoE ensemble method. We also perform data augmentation, including text swap, word substitution and paraphrasing, and demonstrate its effectiveness in combating various (though not all) adversarial attacks at the same time. Finally, we investigate several methods for merging heterogeneous training data (1.35M examples) and performing model ensembling, which are straightforward but effective ways to strengthen NLI models.
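The abstract mentions model-agnostic data augmentation via text swap, word substitution and paraphrasing. The snippet below is a minimal sketch of the first two operations; the toy synonym table and the label-handling rule for swapped pairs are illustrative assumptions for this example, not the authors' exact procedure.

```python
# Illustrative sketch of two NLI augmentations: text swap (exchange
# premise and hypothesis) and word substitution (replace tokens with
# synonyms). The synonym table and label rule are toy assumptions.
import random

TOY_SYNONYMS = {
    "man": ["person", "guy"],
    "happy": ["glad", "cheerful"],
    "car": ["vehicle", "automobile"],
}

def text_swap(premise, hypothesis, label):
    """Swap premise and hypothesis.

    Assumption: keep the gold label only for symmetric relations
    (e.g. contradiction); entailment is not symmetric, so such pairs
    are skipped in this toy rule.
    """
    if label == "contradiction":
        return hypothesis, premise, label
    return None

def word_substitution(sentence, rng):
    """Replace known tokens with a randomly chosen synonym."""
    tokens = sentence.split()
    out = [rng.choice(TOY_SYNONYMS[t]) if t in TOY_SYNONYMS else t
           for t in tokens]
    return " ".join(out)

if __name__ == "__main__":
    rng = random.Random(0)
    premise = "a man is driving a car"
    hypothesis = "a man is sleeping"
    print(text_swap(premise, hypothesis, "contradiction"))
    print(word_substitution(premise, rng))
```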
Anthology ID:
2020.conll-1.48
Volume:
Proceedings of the 24th Conference on Computational Natural Language Learning
Month:
November
Year:
2020
Address:
Online
Editors:
Raquel Fernández, Tal Linzen
Venue:
CoNLL
SIG:
SIGNLL
Publisher:
Association for Computational Linguistics
Pages:
596–608
URL:
https://aclanthology.org/2020.conll-1.48
DOI:
10.18653/v1/2020.conll-1.48
Cite (ACL):
Tianyu Liu, Zheng Xin, Xiaoan Ding, Baobao Chang, and Zhifang Sui. 2020. An Empirical Study on Model-agnostic Debiasing Strategies for Robust Natural Language Inference. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 596–608, Online. Association for Computational Linguistics.
Cite (Informal):
An Empirical Study on Model-agnostic Debiasing Strategies for Robust Natural Language Inference (Liu et al., CoNLL 2020)
PDF:
https://preview.aclanthology.org/nschneid-patch-4/2020.conll-1.48.pdf
Code:
tyliupku/nli-debiasing-datasets
Data:
ANLI, GLUE, MultiNLI, SNLI