Are Word Embeddings Really a Bad Fit for the Estimation of Thematic Fit?

Emmanuele Chersoni, Ludovica Pannitto, Enrico Santus, Alessandro Lenci, Chu-Ren Huang


Abstract
While neural embeddings are a popular choice for word representation in a wide variety of NLP tasks, their use for thematic fit modeling has been limited, as they have been reported to lag behind syntax-based count models. In this paper, we propose a comprehensive evaluation of count models and word embeddings on thematic fit estimation, taking into account a larger number of parameters and verb roles and also introducing dependency-based embeddings into the comparison. Our results show a complex scenario, in which a key factor for performance seems to be whether the model has access to reliable syntactic information for building the distributional representations of the roles.
Anthology ID:
2020.lrec-1.700
Volume:
Proceedings of the Twelfth Language Resources and Evaluation Conference
Month:
May
Year:
2020
Address:
Marseille, France
Editors:
Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Asuncion Moreno, Jan Odijk, Stelios Piperidis
Venue:
LREC
Publisher:
European Language Resources Association
Note:
Pages:
5708–5713
Language:
English
URL:
https://aclanthology.org/2020.lrec-1.700
Cite (ACL):
Emmanuele Chersoni, Ludovica Pannitto, Enrico Santus, Alessandro Lenci, and Chu-Ren Huang. 2020. Are Word Embeddings Really a Bad Fit for the Estimation of Thematic Fit?. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 5708–5713, Marseille, France. European Language Resources Association.
Cite (Informal):
Are Word Embeddings Really a Bad Fit for the Estimation of Thematic Fit? (Chersoni et al., LREC 2020)
PDF:
https://preview.aclanthology.org/nschneid-patch-2/2020.lrec-1.700.pdf