Teaching Syntax by Adversarial Distraction

Juho Kim, Christopher Malon, Asim Kadav


Abstract
Existing entailment datasets mainly pose problems which can be answered without attention to grammar or word order. Learning syntax requires comparing examples where different grammar and word order change the desired classification. We introduce several datasets based on synthetic transformations of natural entailment examples in SNLI or FEVER, to teach aspects of grammar and word order. We show that without retraining, popular entailment models are unaware that these syntactic differences change meaning. With retraining, some but not all popular entailment models can learn to compare the syntax properly.
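To make the idea of a syntax-sensitive distraction concrete, below is a minimal sketch in Python of one possible transformation: swapping two noun phrases in a premise so that the label of an entailment pair flips even though the bag of words is unchanged. The function, the sentences, and the swap rule are hypothetical illustrations, not the paper's actual dataset-generation procedure.

    # Hypothetical illustration of an adversarial distraction based on word order.
    # Swapping the subject and object of the premise changes the correct label
    # from entailment to contradiction without changing any of the words used.

    def swap_noun_phrases(sentence, np1, np2):
        """Swap two noun phrases that each occur exactly once in the sentence."""
        placeholder = "\x00"
        return (sentence.replace(np1, placeholder)
                        .replace(np2, np1)
                        .replace(placeholder, np2))

    premise = "Alice greeted Bob at the station."
    hypothesis = "Alice greeted Bob."
    # Original pair: entailment.

    distracted_premise = swap_noun_phrases(premise, "Alice", "Bob")
    # "Bob greeted Alice at the station." -- the new pair is a contradiction,
    # so a model must attend to word order, not just lexical overlap.
    print(distracted_premise)

A model that ignores word order assigns the same prediction to both premises, which is exactly the failure mode such transformed examples are designed to expose.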
Anthology ID:
W18-5512
Volume:
Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)
Month:
November
Year:
2018
Address:
Brussels, Belgium
Editors:
James Thorne, Andreas Vlachos, Oana Cocarascu, Christos Christodoulopoulos, Arpit Mittal
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
79–84
URL:
https://aclanthology.org/W18-5512
DOI:
10.18653/v1/W18-5512
Cite (ACL):
Juho Kim, Christopher Malon, and Asim Kadav. 2018. Teaching Syntax by Adversarial Distraction. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 79–84, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Teaching Syntax by Adversarial Distraction (Kim et al., EMNLP 2018)
PDF:
https://preview.aclanthology.org/emnlp22-frontmatter/W18-5512.pdf
Data
FEVER, MultiNLI, SNLI