Do Neural Models Learn Systematicity of Monotonicity Inference in Natural Language?

Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui


Abstract
Despite the success of language models using neural networks, it remains unclear to what extent neural models have the generalization ability to perform inferences. In this paper, we introduce a method for evaluating whether neural models can learn the systematicity of monotonicity inference in natural language, namely, the regularity of performing arbitrary inferences by generalizing over composition. We consider four aspects of monotonicity inference and test whether the models can systematically interpret lexical and logical phenomena on different training/test splits. A series of experiments shows that three neural models systematically draw inferences on unseen combinations of lexical and logical phenomena when the syntactic structures of the sentences are similar between the training and test sets. However, the performance of the models decreases significantly when the structures are slightly changed in the test set, even though all vocabulary items and constituents already appear in the training set. This indicates that the generalization ability of neural models is limited to cases where the syntactic structures are nearly the same as those in the training set.
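
To make the evaluated phenomenon concrete: monotonicity inference licenses replacing a phrase with a more general one in an upward-monotone context (e.g., "some dogs ran" entails "some animals ran") and with a more specific one in a downward-monotone context (e.g., "no animals ran" entails "no dogs ran"). Below is a minimal, illustrative Python sketch of the kind of compositional train/test split the paper evaluates. This is not the authors' code (their data generation lives in the verypluming/systematicity repository); the quantifiers, lexical pairs, and held-out combination are all hypothetical.

# Minimal sketch of a compositional split for monotonicity inference.
# Illustrative only; see https://github.com/verypluming/systematicity
# for the actual dataset generation.
from itertools import product

# Upward-monotone quantifiers license narrow-to-broad inference;
# downward-monotone quantifiers reverse the direction.
QUANTIFIERS = {
    "some": "up",    # "some dogs ran" => "some animals ran"
    "no": "down",    # "no animals ran" => "no dogs ran"
}

# Hyponym/hypernym pairs: (narrow term, broad term).
LEXICAL_PAIRS = [
    ("dogs", "animals"),
    ("roses", "flowers"),
]

def make_example(quantifier, direction, narrow, broad):
    """Build a premise/hypothesis pair with its gold label.

    All pairs here are entailments; real data would also
    include non-entailing pairs.
    """
    if direction == "up":
        premise = f"{quantifier} {narrow} ran"
        hypothesis = f"{quantifier} {broad} ran"
    else:  # downward monotone: broad-to-narrow entailment
        premise = f"{quantifier} {broad} ran"
        hypothesis = f"{quantifier} {narrow} ran"
    return {"premise": premise, "hypothesis": hypothesis, "label": "entailment"}

# Systematicity split: hold out one (quantifier, lexical pair)
# combination, so the test set contains only an unseen combination
# of phenomena that each appear individually in training.
train, test = [], []
for (q, d), (narrow, broad) in product(QUANTIFIERS.items(), LEXICAL_PAIRS):
    ex = make_example(q, d, narrow, broad)
    if (q, narrow) == ("no", "roses"):  # hypothetical held-out combination
        test.append(ex)
    else:
        train.append(ex)

print(len(train), "train /", len(test), "test")
print(test[0])

Under this split, a model sees the downward-monotone behavior of "no" (with dogs/animals) and the roses/flowers lexical relation (with "some") separately during training; systematic generalization means correctly labeling their unseen combination at test time.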
Anthology ID:
2020.acl-main.543
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
6105–6117
URL:
https://aclanthology.org/2020.acl-main.543
DOI:
10.18653/v1/2020.acl-main.543
Bibkey:
Cite (ACL):
Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, and Kentaro Inui. 2020. Do Neural Models Learn Systematicity of Monotonicity Inference in Natural Language?. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6105–6117, Online. Association for Computational Linguistics.
Cite (Informal):
Do Neural Models Learn Systematicity of Monotonicity Inference in Natural Language? (Yanaka et al., ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.543.pdf
Video:
http://slideslive.com/38928821
Code:
verypluming/systematicity
Data:
MultiNLI