Empirical Study of Diachronic Word Embeddings for Scarce Data

Syrielle Montariol, Alexandre Allauzen


Abstract
Word meaning change can be inferred from drifts of time-varying word embeddings. However, temporal data may be too sparse to build robust word embeddings and to discriminate significant drifts from noise. In this paper, we compare three models for learning diachronic word embeddings on scarce data: incremental updating of a Skip-Gram from Kim et al. (2014), dynamic filtering from Bamler & Mandt (2017), and dynamic Bernoulli embeddings from Rudolph & Blei (2018). In particular, we study the performance of different initialisation schemes and, relying on the distribution of detected drifts, identify which characteristics of each model are best suited to data scarcity. Finally, we regularise the loss of these models to better adapt them to scarce data.
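The central idea of the abstract, inferring meaning change from the drift of a word's vector across time slices, can be sketched in a few lines. This is a toy illustration only, not the paper's implementation: the vocabulary, the vectors, and the 1.0 drift cut-off are all made up for the example, and it assumes consecutive slices share one embedding space (as with incremental updating, where each slice is initialised from the previous one, so no post-hoc alignment is needed).

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity of two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def drift_scores(emb_prev, emb_next, vocab):
    """Cosine drift of each word between two consecutive time slices.

    Assumes both slices live in one shared space, e.g. because the later
    model was initialised from the earlier one (incremental updating)."""
    return {w: 1.0 - cosine(emb_prev[i], emb_next[i])
            for i, w in enumerate(vocab)}

# Toy setup: 4 words, 5-dimensional embeddings.
rng = np.random.default_rng(0)
vocab = ["cat", "dog", "run", "mouse"]
emb_t0 = rng.normal(size=(4, 5))
emb_t1 = emb_t0 + rng.normal(scale=0.01, size=(4, 5))  # small noise: no real change
emb_t1[3] = -emb_t0[3]  # force a large, genuine drift for "mouse"

scores = drift_scores(emb_t0, emb_t1, vocab)
# Separating real drift from noise: here a crude absolute cut-off;
# the paper instead examines the whole distribution of detected drifts.
flagged = [w for w, s in scores.items() if s > 1.0]
print(flagged)  # → ['mouse']
```

On scarce data, the noise floor of the drift distribution rises, which is why the paper argues a simple threshold is not enough and the shape of the drift distribution must be taken into account.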
Anthology ID: R19-1092
Volume: Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019)
Month: September
Year: 2019
Address: Varna, Bulgaria
Editors: Ruslan Mitkov, Galia Angelova
Venue: RANLP
Publisher: INCOMA Ltd.
Pages: 795–803
URL: https://aclanthology.org/R19-1092
DOI: 10.26615/978-954-452-056-4_092
Cite (ACL): Syrielle Montariol and Alexandre Allauzen. 2019. Empirical Study of Diachronic Word Embeddings for Scarce Data. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019), pages 795–803, Varna, Bulgaria. INCOMA Ltd.
Cite (Informal): Empirical Study of Diachronic Word Embeddings for Scarce Data (Montariol & Allauzen, RANLP 2019)
PDF: https://preview.aclanthology.org/nschneid-patch-5/R19-1092.pdf
Data: New York Times Annotated Corpus