Analyzing Dynamic Adversarial Training Data in the Limit

Eric Wallace, Adina Williams, Robin Jia, Douwe Kiela


Abstract
To create models that are robust across a wide range of test inputs, training datasets should include diverse examples that span numerous phenomena. Dynamic adversarial data collection (DADC), where annotators craft examples that challenge continually improving models, holds promise as an approach for generating such diverse training sets. Prior work has shown that running DADC over 1-3 rounds can help models fix some error types, but it does not necessarily lead to better generalization beyond adversarial test data. We argue that running DADC over many rounds maximizes its training-time benefits, as the different rounds can together cover many of the task-relevant phenomena. We present the first study of longer-term DADC, where we collect 20 rounds of NLI examples for a small set of premise paragraphs, with both adversarial and non-adversarial approaches. Models trained on DADC examples make 26% fewer errors on our expert-curated test set compared to models trained on non-adversarial data. Our analysis shows that DADC yields examples that are more difficult, more lexically and syntactically diverse, and contain fewer annotation artifacts compared to non-adversarial examples.
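To make the procedure concrete, here is a minimal Python sketch of the multi-round DADC loop the abstract describes. Every name in it (Example, collect_adversarial_examples, train_model, run_dadc) is a hypothetical stand-in rather than the authors' API; their actual pipeline is in the facebookresearch/dadc-limit repository linked below.

from typing import List, Tuple

# An NLI example: (premise, hypothesis, gold label). All names in this
# sketch are hypothetical; see facebookresearch/dadc-limit for the real code.
Example = Tuple[str, str, str]

def collect_adversarial_examples(model, premises: List[str], n: int) -> List[Example]:
    # Placeholder: in the paper's setting, human annotators write
    # hypotheses for a small, fixed set of premise paragraphs until
    # the *current* model is fooled.
    raise NotImplementedError("replace with a human-in-the-loop interface")

def train_model(examples: List[Example]):
    # Placeholder: fine-tune an NLI classifier on all examples so far.
    raise NotImplementedError("replace with your model-training code")

def run_dadc(initial_model, premises: List[str],
             num_rounds: int = 20, examples_per_round: int = 100):
    # Each round targets the current model, so later rounds must probe
    # phenomena the retrained model has not yet learned -- this is what
    # pushes the collected pool toward broader task coverage.
    model, pool = initial_model, []
    for _ in range(num_rounds):
        pool.extend(collect_adversarial_examples(model, premises, n=examples_per_round))
        model = train_model(pool)  # retrain on the cumulative pool
    return model, pool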
Anthology ID:
2022.findings-acl.18
Volume:
Findings of the Association for Computational Linguistics: ACL 2022
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
202–217
URL:
https://aclanthology.org/2022.findings-acl.18
DOI:
10.18653/v1/2022.findings-acl.18
Cite (ACL):
Eric Wallace, Adina Williams, Robin Jia, and Douwe Kiela. 2022. Analyzing Dynamic Adversarial Training Data in the Limit. In Findings of the Association for Computational Linguistics: ACL 2022, pages 202–217, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Analyzing Dynamic Adversarial Training Data in the Limit (Wallace et al., Findings 2022)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/2022.findings-acl.18.pdf
Video:
https://preview.aclanthology.org/emnlp-22-attachments/2022.findings-acl.18.mp4
Code:
facebookresearch/dadc-limit
Data:
MultiNLI, SNLI