Investigating the Stability of Concrete Nouns in Word Embeddings

Bénédicte Pierrejean, Ludovic Tanguy


Abstract
Word embeddings trained with neural-based methods (such as word2vec SGNS) are known to be prone to stability problems: across two models trained with the exact same set of parameters, the nearest neighbors of a word are likely to change. Not all words are equally affected by this internal instability, and recent studies have investigated the features that influence the stability of word embeddings. Stability can be seen as a clue to the reliability of a word's semantic representation. In this work, we investigate the influence of the degree of concreteness of nouns on the stability of their semantic representation. We show that for generic English corpora, abstract words are more affected by stability problems than concrete words. We also find that, to a certain extent, the difference in degree of concreteness between a noun and its nearest neighbors can partly explain the stability or instability of its neighborhood.
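To make the notion of stability concrete, here is a minimal sketch (not the authors' code) of a nearest-neighbor overlap measure, assuming gensim's word2vec implementation (gensim 4.x API). The corpus, hyperparameter values, and example words are illustrative assumptions, not the paper's exact setup.

from gensim.models import Word2Vec

def train_sgns(sentences, seed):
    # Skip-gram with negative sampling (sg=1); hyperparameter values are illustrative
    return Word2Vec(sentences, vector_size=100, window=5, sg=1,
                    negative=5, min_count=5, workers=1, seed=seed)

def neighbor_overlap(model_a, model_b, word, topn=25):
    # Fraction of the top-n nearest neighbors shared by the two models:
    # a rough proxy for how stable the word's semantic representation is.
    nn_a = {w for w, _ in model_a.wv.most_similar(word, topn=topn)}
    nn_b = {w for w, _ in model_b.wv.most_similar(word, topn=topn)}
    return len(nn_a & nn_b) / topn

# Hypothetical usage: two runs on the same tokenized corpus, differing only in seed
# sentences = [...]  # tokenized corpus
# m1, m2 = train_sgns(sentences, seed=1), train_sgns(sentences, seed=2)
# print(neighbor_overlap(m1, m2, "table"))    # concrete noun
# print(neighbor_overlap(m1, m2, "freedom"))  # abstract noun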
Anthology ID:
W19-0510
Volume:
Proceedings of the 13th International Conference on Computational Semantics - Short Papers
Month:
May
Year:
2019
Address:
Gothenburg, Sweden
Venue:
IWCS
SIG:
SIGSEM
Publisher:
Association for Computational Linguistics
Pages:
65–70
URL:
https://aclanthology.org/W19-0510
DOI:
10.18653/v1/W19-0510
Cite (ACL):
Bénédicte Pierrejean and Ludovic Tanguy. 2019. Investigating the Stability of Concrete Nouns in Word Embeddings. In Proceedings of the 13th International Conference on Computational Semantics - Short Papers, pages 65–70, Gothenburg, Sweden. Association for Computational Linguistics.
Cite (Informal):
Investigating the Stability of Concrete Nouns in Word Embeddings (Pierrejean & Tanguy, IWCS 2019)
PDF:
https://aclanthology.org/W19-0510.pdf