Exploring the Value of Personalized Word Embeddings
Charles Welch, Jonathan K. Kummerfeld, Verónica Pérez-Rosas, Rada Mihalcea
Abstract
In this paper, we introduce personalized word embeddings and examine their value for language modeling. We compare the performance of our proposed prediction model when using personalized versus generic word representations, and study how these representations can be leveraged for improved performance. We provide insight into what types of words can be more accurately predicted when building personalized models. Our results show that a subset of words belonging to specific psycholinguistic categories tends to vary more in its representations across users, and that combining generic and personalized word embeddings yields the best performance, with a 4.7% relative reduction in perplexity. Additionally, we show that a language model using personalized word embeddings can be effectively used for authorship attribution.
- Anthology ID:
- 2020.coling-main.604
- Volume:
- Proceedings of the 28th International Conference on Computational Linguistics
- Month:
- December
- Year:
- 2020
- Address:
- Barcelona, Spain (Online)
- Editors:
- Donia Scott, Nuria Bel, Chengqing Zong
- Venue:
- COLING
- Publisher:
- International Committee on Computational Linguistics
- Pages:
- 6856–6862
- URL:
- https://aclanthology.org/2020.coling-main.604
- DOI:
- 10.18653/v1/2020.coling-main.604
- Cite (ACL):
- Charles Welch, Jonathan K. Kummerfeld, Verónica Pérez-Rosas, and Rada Mihalcea. 2020. Exploring the Value of Personalized Word Embeddings. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6856–6862, Barcelona, Spain (Online). International Committee on Computational Linguistics.
- Cite (Informal):
- Exploring the Value of Personalized Word Embeddings (Welch et al., COLING 2020)
- PDF:
- https://preview.aclanthology.org/ingest-2024-clasp/2020.coling-main.604.pdf
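The abstract reports that combining generic and personalized word embeddings gives the best language-modeling performance. A minimal sketch of one common way to combine such representations (concatenating a shared generic vector with a per-user vector); the vocabulary, dimensions, and function names here are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

# Illustrative vocabulary and dimensions (assumptions for this sketch;
# the paper's corpus and embedding sizes may differ).
VOCAB = ["the", "game", "awesome"]
GENERIC_DIM = 4
PERSONAL_DIM = 4

rng = np.random.default_rng(0)

# Generic embeddings: one vector per word, shared across all users.
generic = {w: rng.normal(size=GENERIC_DIM) for w in VOCAB}

# Personalized embeddings: one vector per (user, word) pair, which would
# be trained on that user's text alone.
users = ["user_a", "user_b"]
personal = {u: {w: rng.normal(size=PERSONAL_DIM) for w in VOCAB}
            for u in users}

def combined_embedding(user: str, word: str) -> np.ndarray:
    """Concatenate the shared generic vector with the user's personal vector."""
    return np.concatenate([generic[word], personal[user][word]])

# The combined vector has GENERIC_DIM + PERSONAL_DIM components and could
# feed the input layer of a language model.
vec = combined_embedding("user_a", "awesome")
print(vec.shape)
```

Because the personal component differs across users for the same word, a language model over these combined inputs can also separate authors, in line with the authorship-attribution result reported above.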