An Artificial Language Evaluation of Distributional Semantic Models

Fatemeh Torabi Asr, Michael Jones


Abstract
Recent studies of distributional semantic models have set up a competition between word embeddings obtained from predictive neural networks and word vectors obtained from count-based models. This paper attempts to reveal the respective contributions of additional training data and of post-processing steps to each type of model's performance on word similarity and relatedness inference tasks. We do so by designing an artificial language framework, training a predictive model and a count-based model on data sampled from this grammar, and evaluating the resulting word vectors on paradigmatic and syntagmatic tasks defined with respect to the grammar.
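To make the evaluation setup concrete, the sketch below illustrates the general idea with a toy stand-in: it is not the paper's grammar or models. It samples sentences from a hypothetical three-slot artificial grammar, builds a simple count-based model (co-occurrence counts with positive PMI weighting), and runs a paradigmatic-style check that words of the same grammatical class end up with more similar vectors than words of different classes. All class names and grammar details here are invented for illustration.

```python
import math
import random
from collections import Counter, defaultdict

random.seed(0)

# Hypothetical artificial grammar: every sentence is NOUN VERB NOUN,
# with words drawn uniformly from small paradigmatic classes.
nouns = ["cat", "dog", "car", "bus"]
verbs = ["sees", "likes"]

def sample_sentence():
    return [random.choice(nouns), random.choice(verbs), random.choice(nouns)]

corpus = [sample_sentence() for _ in range(5000)]

# Count-based model: co-occurrence counts within a +/-1 word window.
cooc = defaultdict(Counter)
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - 1), min(len(sent), i + 2)):
            if j != i:
                cooc[w][sent[j]] += 1

total = sum(sum(c.values()) for c in cooc.values())
w_count = {w: sum(c.values()) for w, c in cooc.items()}

def ppmi(w, c):
    """Positive pointwise mutual information of word w with context c."""
    p_wc = cooc[w][c] / total
    if p_wc == 0:
        return 0.0
    return max(0.0, math.log(p_wc / ((w_count[w] / total) * (w_count[c] / total))))

vocab = sorted(w_count)
vec = {w: [ppmi(w, c) for c in vocab] for w in vocab}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Paradigmatic-style check: two nouns share contexts (they distribute
# over the same verbs), so their vectors should be far more similar
# than a noun-verb pair.
print("cat ~ dog :", cosine(vec["cat"], vec["dog"]))
print("cat ~ sees:", cosine(vec["cat"], vec["sees"]))
```

A predictive model (e.g. a skip-gram network) trained on the same sampled corpus could be scored on the identical class-similarity test, which is the kind of controlled side-by-side comparison the abstract describes.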
Anthology ID:
K17-1015
Volume:
Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017)
Month:
August
Year:
2017
Address:
Vancouver, Canada
Editors:
Roger Levy, Lucia Specia
Venue:
CoNLL
SIG:
SIGNLL
Publisher:
Association for Computational Linguistics
Pages:
134–142
URL:
https://aclanthology.org/K17-1015
DOI:
10.18653/v1/K17-1015
Cite (ACL):
Fatemeh Torabi Asr and Michael Jones. 2017. An Artificial Language Evaluation of Distributional Semantic Models. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 134–142, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal):
An Artificial Language Evaluation of Distributional Semantic Models (Torabi Asr & Jones, CoNLL 2017)
PDF:
https://preview.aclanthology.org/ingest-2024-clasp/K17-1015.pdf