Evaluating Neural Model Robustness for Machine Comprehension

Winston Wu, Dustin Arendt, Svitlana Volkova

Abstract
We evaluate neural model robustness to adversarial attacks using different types of linguistic unit perturbations (character- and word-level) and propose a new method for strategic sentence-level perturbations. We experiment with different amounts of perturbation to examine model confidence and misclassification rate, and contrast model performance with different embeddings, BERT and ELMo, on two benchmark datasets, SQuAD and TriviaQA. We demonstrate how to improve model performance during an adversarial attack by using ensembles. Finally, we analyze factors that affect model behavior under adversarial attack and develop a new model to predict errors during attacks. Our novel findings reveal that (a) unlike models that use BERT embeddings, models that use ELMo embeddings are more susceptible to adversarial attacks; (b) unlike word- and paraphrase-level perturbations, character-level perturbations affect the model the most but are most easily compensated for by adversarial training; (c) word-level perturbations lead to more high-confidence misclassifications than sentence- and character-level perturbations; (d) the type of question and the length of the model's answer (the longer the answer, the more likely it is to be incorrect) are the most predictive of model errors in an adversarial setting; and (e) conclusions about model behavior are dataset-specific.
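For intuition, the minimal Python sketch below illustrates the character- and word-level perturbation types the abstract refers to. It is not the paper's implementation: the helper names (swap_chars, substitute_words), the adjacent-character-swap rule, and the substitution table are illustrative assumptions; the paper's actual attack recipes and its sentence-level paraphrase method are described in the full text.

```python
import random

def swap_chars(text, n=1, seed=None):
    """Character-level perturbation: swap two adjacent inner characters
    in up to n randomly chosen words (first/last characters kept fixed)."""
    rng = random.Random(seed)
    words = text.split()
    for i in rng.sample(range(len(words)), k=min(n, len(words))):
        w = words[i]
        if len(w) > 3:  # leave short words intact
            j = rng.randrange(1, len(w) - 2)
            words[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]
    return " ".join(words)

def substitute_words(text, table):
    """Word-level perturbation: replace whole words via a substitution
    table (a stand-in for embedding-neighbor or synonym substitution)."""
    return " ".join(table.get(w, w) for w in text.split())

question = "Which team won the championship in 1995?"
print(swap_chars(question, n=2, seed=0))                               # typo-style attack
print(substitute_words(question, {"won": "claimed", "team": "side"}))  # lexical attack
```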
Anthology ID:
2021.eacl-main.210
Volume:
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Month:
April
Year:
2021
Address:
Online
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
2470–2481
URL:
https://aclanthology.org/2021.eacl-main.210
DOI:
10.18653/v1/2021.eacl-main.210
Cite (ACL):
Winston Wu, Dustin Arendt, and Svitlana Volkova. 2021. Evaluating Neural Model Robustness for Machine Comprehension. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2470–2481, Online. Association for Computational Linguistics.
Cite (Informal):
Evaluating Neural Model Robustness for Machine Comprehension (Wu et al., EACL 2021)
PDF:
https://aclanthology.org/2021.eacl-main.210.pdf
Data:
SQuAD, TriviaQA