Bénédicte Pierrejean


2019

Investigating the Stability of Concrete Nouns in Word Embeddings
Bénédicte Pierrejean | Ludovic Tanguy
Proceedings of the 13th International Conference on Computational Semantics - Short Papers

We know that word embeddings trained using neural methods (such as word2vec SGNS) are prone to stability problems: across two models trained with the exact same set of parameters, the nearest neighbors of a word are likely to change. Not all words are equally affected by this internal instability, and recent studies have investigated the features influencing the stability of word embeddings. This stability can be seen as a clue to the reliability of the semantic representation of a word. In this work, we investigate the influence of the degree of concreteness of nouns on the stability of their semantic representation. We show that on generic English corpora, abstract words are more affected by stability problems than concrete words. We also find that the difference in concreteness between a noun and its nearest neighbors can partly explain the stability or instability of its neighbors.
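
A minimal sketch of the kind of comparison described above, not the authors' exact protocol: the stability dict is assumed to hold precomputed neighbor-overlap scores between two identically trained models (see the overlap sketch further down this page), the concreteness dict human ratings on a 1-5 scale, and the 3.0 threshold is an arbitrary midpoint.

```python
def mean_stability_by_concreteness(stability, concreteness, threshold=3.0):
    """Compare mean stability of concrete vs. abstract nouns.

    stability: {noun: neighbor-overlap score in [0, 1]}
    concreteness: {noun: human rating on a 1-5 scale}
    """
    concrete = [s for w, s in stability.items()
                if concreteness.get(w, 0.0) >= threshold]
    abstract = [s for w, s in stability.items()
                if 0.0 < concreteness.get(w, 0.0) < threshold]
    return sum(concrete) / len(concrete), sum(abstract) / len(abstract)

# Toy illustration with made-up scores: 'table' is rated concrete,
# 'justice' abstract.
stability = {"table": 0.88, "justice": 0.52}
concreteness = {"table": 4.9, "justice": 1.6}
print(mean_stability_by_concreteness(stability, concreteness))  # (0.88, 0.52)
```

Per the abstract, the expected pattern on generic English corpora is that the first (concrete) mean exceeds the second (abstract) one.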

Toward a Computational Multidimensional Lexical Similarity Measure for Modeling Word Association Tasks in Psycholinguistics
Bruno Gaume | Lydia Mai Ho-Dac | Ludovic Tanguy | Cécile Fabre | Bénédicte Pierrejean | Nabil Hathout | Jérôme Farinas | Julien Pinquier | Lola Danet | Patrice Péran | Xavier De Boissezon | Mélanie Jucla
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics

This paper presents the first results of “Evolex”, a multidisciplinary project bringing together researchers in psycholinguistics, neuropsychology, computer science, natural language processing and linguistics. The Evolex project aims to propose a new data-based inductive method for automatically characterising the relation between pairs of French words collected in psycholinguistic experiments on lexical access. This method takes advantage of several complementary computational measures of semantic similarity. We show that some measures are more strongly correlated than others with the frequency of lexical associations, and that they also differ in the way they capture different semantic relations. This allows us to consider building a multidimensional lexical similarity measure to automate the classification of lexical associations.
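
A minimal sketch of the kind of measure-vs-association comparison described above, assuming a gensim KeyedVectors model and scipy; the (cue, response, count) triples and the file name are illustrative placeholders, not the Evolex data.

```python
from gensim.models import KeyedVectors
from scipy.stats import spearmanr

def similarity_vs_association(vectors, associations):
    """Spearman correlation between cosine similarity and association frequency.

    associations: iterable of (cue, response, count) triples, where count is
    how often response was produced for cue in the experiment.
    """
    sims, counts = [], []
    for cue, response, count in associations:
        if cue in vectors and response in vectors:
            sims.append(float(vectors.similarity(cue, response)))
            counts.append(count)
    return spearmanr(sims, counts)

# Hypothetical usage with placeholder data:
# vectors = KeyedVectors.load("french_vectors.kv")
# rho, p = similarity_vs_association(vectors, [("chat", "souris", 12)])
```

Repeating the comparison with several complementary similarity measures, as the abstract describes, is what yields the multiple dimensions of the proposed lexical similarity.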

2018

Etude de la reproductibilité des word embeddings : repérage des zones stables et instables dans le lexique (Reproducibility of word embeddings: identifying stable and unstable zones in the semantic space)
Bénédicte Pierrejean | Ludovic Tanguy
Actes de la Conférence TALN. Volume 1 - Articles longs, articles courts de TALN

Distributional semantic vector models (word embeddings), in particular those produced by neural methods, raise reproducibility issues: they yield different representations on each run, even when their parameters are left unchanged. We present a set of experiments designed to measure this instability, both globally and locally. Globally, we measured the variation rate of word neighborhoods on three different corpora, which is estimated at around 17% for the 25 nearest neighbors of a word. Locally, we identified and characterized certain zones of the semantic space that show relative stability, as well as cases of high instability.

Predicting Word Embeddings Variability
Bénédicte Pierrejean | Ludovic Tanguy
Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics

Neural word embedding models (such as those built with word2vec) are known to have stability problems: when a model is retrained with the exact same hyperparameters, word neighborhoods may change. We propose a method to estimate this variation, based on the overlap between the neighbors of a given word in two models trained with identical hyperparameters. We show that this inherent variation is not negligible and that it does not affect every word in the same way. We examine the influence of several features intrinsic to a word, a corpus or an embedding model, and provide a methodology that can predict the variability (and thus the reliability) of a word's representation in a semantic vector space.
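
A minimal sketch of the overlap measure described in the abstract, assuming gensim's word2vec implementation; the hyperparameter values are illustrative and sentences is a placeholder for a real tokenized corpus.

```python
from gensim.models import Word2Vec

def neighbor_overlap(model_a, model_b, word, n=25):
    """Share of the n nearest neighbors of word common to both models."""
    nn_a = {w for w, _ in model_a.wv.most_similar(word, topn=n)}
    nn_b = {w for w, _ in model_b.wv.most_similar(word, topn=n)}
    return len(nn_a & nn_b) / n

sentences = ...  # iterable of tokenized sentences (placeholder)
params = dict(vector_size=100, window=5, sg=1, negative=5, min_count=5)

# Same corpus, same hyperparameters: the two runs still diverge because of
# random initialization, negative sampling and multi-threaded training.
model_1 = Word2Vec(sentences, seed=1, **params)
model_2 = Word2Vec(sentences, seed=2, **params)
print(neighbor_overlap(model_1, model_2, "stability"))
```

An overlap close to 1 marks a word whose representation is reliable across runs; averaging 1 - overlap over the vocabulary gives a global variation rate of the kind reported in the TALN paper above.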

Towards Qualitative Word Embeddings Evaluation: Measuring Neighbors Variation
Bénédicte Pierrejean | Ludovic Tanguy
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop

We propose a method to study the variation between word embedding models trained with different parameters. We explore the variation between models that differ by a single parameter by observing how their distributional neighbors vary, and show that changing only one parameter can have a massive impact on a given semantic space. We show that this variation does not affect all words of the semantic space equally (a sketch of the experiment follows below). It is influenced by parameter settings, such as fixing a parameter to its minimum or maximum value, but it also depends on intrinsic corpus features such as the frequency of a word. We identify semantic classes of words that remain stable across the trained models, as well as specific words with high variation.
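
A minimal sketch of the one-parameter experiment described above, reusing the neighbor_overlap helper from the previous sketch; the baseline settings and the window values are illustrative, not the paper's actual grid.

```python
from gensim.models import Word2Vec

baseline = dict(vector_size=100, window=5, sg=1, negative=5, min_count=5)
sentences = ...  # iterable of tokenized sentences (placeholder)
reference = Word2Vec(sentences, seed=1, **baseline)

for window in (2, 10):  # vary a single hyperparameter, keep the rest fixed
    variant = Word2Vec(sentences, seed=1, **{**baseline, "window": window})
    shared = [w for w in reference.wv.index_to_key if w in variant.wv]
    mean = sum(neighbor_overlap(reference, variant, w) for w in shared) / len(shared)
    print(f"window={window}: mean overlap {mean:.2f}")
```

Comparing per-word overlaps across such runs is what allows stable semantic classes and highly variable individual words to be identified.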