Abstract
Modern Natural Language Processing (NLP) models are known to be sensitive to input perturbations and their performance can decrease when applied to real-world, noisy data. However, it is still unclear why models are less robust to some perturbations than others. In this work, we test the hypothesis that the extent to which a model is affected by an unseen textual perturbation (robustness) can be explained by the learnability of the perturbation (defined as how well the model learns to identify the perturbation with a small amount of evidence). We further give a causal justification for the learnability metric. We conduct extensive experiments with four prominent NLP models (TextRNN, BERT, RoBERTa, and XLNet) over eight types of textual perturbations on three datasets. We show that a model which is better at identifying a perturbation (higher learnability) becomes worse at ignoring such a perturbation at test time (lower robustness), providing empirical support for our hypothesis.
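The abstract contrasts two quantities: learnability (how well a model learns to identify a perturbation from a small amount of evidence) and robustness (how much an unseen perturbation degrades test-time performance). Below is a minimal sketch of one way these could be measured; it is an illustrative probe-based reading of the definitions, not the authors' implementation. The `perturb` function and the `task_model.predict` interface are hypothetical placeholders.

```python
# Illustrative sketch only; names `perturb` and `task_model` are assumed, not from the paper.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def learnability(clean_texts, perturb, n_examples=100):
    """Held-out accuracy of a small probe trained to tell clean text from
    perturbed text using only `n_examples` clean/perturbed pairs
    (higher accuracy = a more learnable perturbation)."""
    sample = clean_texts[:n_examples]
    texts = sample + [perturb(t) for t in sample]
    labels = [0] * len(sample) + [1] * len(sample)
    X = CountVectorizer().fit_transform(texts)
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return probe.score(X_te, y_te)


def robustness(task_model, texts, gold_labels, perturb):
    """Accuracy drop when an unseen perturbation is applied at test time
    (smaller drop = a more robust model)."""
    clean_acc = accuracy_score(gold_labels, task_model.predict(texts))
    noisy_acc = accuracy_score(gold_labels, task_model.predict([perturb(t) for t in texts]))
    return clean_acc - noisy_acc
```

Under the paper's hypothesis, perturbations with higher `learnability` scores would tend to produce larger drops in the `robustness` measurement.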
- Anthology ID:
- 2022.findings-acl.315
- Volume:
- Findings of the Association for Computational Linguistics: ACL 2022
- Month:
- May
- Year:
- 2022
- Address:
- Dublin, Ireland
- Editors:
- Smaranda Muresan, Preslav Nakov, Aline Villavicencio
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 3993–4007
- URL:
- https://aclanthology.org/2022.findings-acl.315
- DOI:
- 10.18653/v1/2022.findings-acl.315
- Cite (ACL):
- Yunxiang Zhang, Liangming Pan, Samson Tan, and Min-Yen Kan. 2022. Interpreting the Robustness of Neural NLP Models to Textual Perturbations. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3993–4007, Dublin, Ireland. Association for Computational Linguistics.
- Cite (Informal):
- Interpreting the Robustness of Neural NLP Models to Textual Perturbations (Zhang et al., Findings 2022)
- PDF:
- https://preview.aclanthology.org/proper-vol2-ingestion/2022.findings-acl.315.pdf