Language Modeling with Syntactic and Semantic Representation for Sentence Acceptability Predictions

Adam Ek, Jean-Philippe Bernardy, Shalom Lappin

Abstract
In this paper, we investigate the effect of enhancing lexical embeddings in LSTM language models (LMs) with syntactic and semantic representations. We evaluate the language models using perplexity, and we evaluate their performance on the task of predicting human sentence acceptability judgments. We train the LSTM language models on sentences automatically annotated with universal syntactic dependency roles (Nivre et al., 2016), dependency depth, and universal semantic tags (Abzianidze et al., 2017). Our experiments indicate that syntactic tags lower perplexity, while semantic tags increase it. They also show that neither syntactic nor semantic tags improve the models' performance on the acceptability prediction task.
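The abstract's central idea, combining each word's lexical embedding with embeddings of its automatically assigned syntactic or semantic tags before the LSTM, can be sketched as below. This is a minimal illustration assuming concatenation as the combination method; the class name, dimensions, and hyperparameters are hypothetical and not taken from the paper (the authors' implementation is in the repository listed under Code below).

```python
import torch
import torch.nn as nn

class TagAugmentedLSTMLM(nn.Module):
    """LSTM language model whose inputs are word embeddings
    concatenated with embeddings of per-token tags (e.g. UD
    dependency roles or semantic tags). All sizes are illustrative,
    not the paper's hyperparameters."""

    def __init__(self, vocab_size, tag_size,
                 word_dim=300, tag_dim=50, hidden_dim=512):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.tag_emb = nn.Embedding(tag_size, tag_dim)
        self.lstm = nn.LSTM(word_dim + tag_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, words, tags):
        # words, tags: (batch, seq_len) index tensors
        x = torch.cat([self.word_emb(words), self.tag_emb(tags)], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)  # next-word logits at each position

def perplexity(model, words, tags):
    # Perplexity = exp(mean cross-entropy of next-word predictions).
    logits = model(words[:, :-1], tags[:, :-1])
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), words[:, 1:].reshape(-1))
    return loss.exp().item()

# Dummy usage with random indices, just to show the shapes involved.
model = TagAugmentedLSTMLM(vocab_size=10000, tag_size=40)
words = torch.randint(0, 10000, (2, 12))
tags = torch.randint(0, 40, (2, 12))
print(perplexity(model, words, tags))
```

Concatenation is only one way to inject the tags; the paper's exact combination method and its acceptability-prediction setup can be found in the linked code.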
Anthology ID: W19-6108
Volume: Proceedings of the 22nd Nordic Conference on Computational Linguistics
Month: September–October
Year: 2019
Address: Turku, Finland
Editors: Mareike Hartmann, Barbara Plank
Venue: NoDaLiDa
Publisher: Linköping University Electronic Press
Pages: 76–85
URL: https://aclanthology.org/W19-6108
Cite (ACL): Adam Ek, Jean-Philippe Bernardy, and Shalom Lappin. 2019. Language Modeling with Syntactic and Semantic Representation for Sentence Acceptability Predictions. In Proceedings of the 22nd Nordic Conference on Computational Linguistics, pages 76–85, Turku, Finland. Linköping University Electronic Press.
Cite (Informal): Language Modeling with Syntactic and Semantic Representation for Sentence Acceptability Predictions (Ek et al., NoDaLiDa 2019)
PDF: https://preview.aclanthology.org/teach-a-man-to-fish/W19-6108.pdf
Code: gu-clasp/predicting-acceptability
Data: Universal Dependencies