Lexicosyntactic Inference in Neural Models

Aaron Steven White, Rachel Rudinger, Kyle Rawlins, Benjamin Van Durme
Abstract
We investigate neural models’ ability to capture lexicosyntactic inferences: inferences triggered by the interaction of lexical and syntactic information. We take the task of event factuality prediction as a case study and build a factuality judgment dataset for all English clause-embedding verbs in various syntactic contexts. We use this dataset, which we make publicly available, to probe the behavior of current state-of-the-art neural systems, showing that these systems make certain systematic errors that are clearly visible through the lens of factuality prediction.
Anthology ID:
D18-1501
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
4717–4724
URL:
https://aclanthology.org/D18-1501
DOI:
10.18653/v1/D18-1501
Cite (ACL):
Aaron Steven White, Rachel Rudinger, Kyle Rawlins, and Benjamin Van Durme. 2018. Lexicosyntactic Inference in Neural Models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4717–4724, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Lexicosyntactic Inference in Neural Models (White et al., EMNLP 2018)
PDF:
https://preview.aclanthology.org/naacl24-info/D18-1501.pdf
Attachment:
 D18-1501.Attachment.pdf
Video:
 https://preview.aclanthology.org/naacl24-info/D18-1501.mp4
Data
MegaVeridicality