A Mixture Model for Learning Multi-Sense Word Embeddings
Dai Quoc Nguyen, Dat Quoc Nguyen, Ashutosh Modi, Stefan Thater, Manfred Pinkal
Abstract
Word embeddings are now a standard technique for inducing meaning representations for words. To obtain good representations, it is important to take the different senses of a word into account. In this paper, we propose a mixture model for learning multi-sense word embeddings. Our model generalizes previous work in that it allows us to induce different weights for the different senses of a word. Experimental results show that our model outperforms previous models on standard evaluation tasks.
- Anthology ID:
- S17-1015
- Volume:
- Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017)
- Month:
- August
- Year:
- 2017
- Address:
- Vancouver, Canada
- Editors:
- Nancy Ide, Aurélie Herbelot, Lluís Màrquez
- Venue:
- *SEM
- SIGs:
- SIGLEX | SIGSEM
- Publisher:
- Association for Computational Linguistics
- Note:
- Pages:
- 121–127
- Language:
- URL:
- https://aclanthology.org/S17-1015
- DOI:
- 10.18653/v1/S17-1015
- Cite (ACL):
- Dai Quoc Nguyen, Dat Quoc Nguyen, Ashutosh Modi, Stefan Thater, and Manfred Pinkal. 2017. A Mixture Model for Learning Multi-Sense Word Embeddings. In Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017), pages 121–127, Vancouver, Canada. Association for Computational Linguistics.
- Cite (Informal):
- A Mixture Model for Learning Multi-Sense Word Embeddings (Nguyen et al., *SEM 2017)
- PDF:
- https://aclanthology.org/S17-1015.pdf
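
The abstract describes a model in which each word sense gets its own vector and a learned weight, and the word's representation is a mixture over those senses. The sketch below is only a rough, hypothetical illustration of that general idea (not the paper's actual formulation or hyperparameters): a skip-gram-style trainer in which each word's prediction vector is the softmax-weighted mixture of its sense vectors, with the mixture weights learned jointly.

```python
# Hypothetical sketch of a "mixture over senses" word embedding;
# NOT the model from Nguyen et al. (*SEM 2017), just the general idea.
import numpy as np

rng = np.random.default_rng(0)

corpus = "the bank approved the loan the river bank flooded".split()
vocab = sorted(set(corpus))
w2i = {w: i for i, w in enumerate(vocab)}
V, K, D = len(vocab), 2, 10            # vocab size, senses per word, embedding dim

sense_vecs = 0.01 * rng.standard_normal((V, K, D))   # K sense vectors per word
sense_logits = np.zeros((V, K))                      # mixture weights (pre-softmax)
ctx_vecs = 0.01 * rng.standard_normal((V, D))        # context (output) vectors

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mixture_embedding(w):
    """Return the softmax-weighted mixture of word w's sense vectors."""
    p = softmax(sense_logits[w])                     # (K,)
    return p @ sense_vecs[w], p                      # (D,), (K,)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr, window = 0.1, 2

for epoch in range(50):
    for i, w in enumerate(corpus):
        wi = w2i[w]
        for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
            if j == i:
                continue
            ci = w2i[corpus[j]]                      # observed context word
            ni = int(rng.integers(V))                # one random negative sample
            emb, p = mixture_embedding(wi)
            for tgt, label in ((ci, 1.0), (ni, 0.0)):
                score = sigmoid(emb @ ctx_vecs[tgt])
                g = score - label                    # gradient of logistic loss wrt score
                grad_emb = g * ctx_vecs[tgt]         # gradient wrt the mixed embedding
                ctx_vecs[tgt] -= lr * g * emb
                # push the gradient into each sense vector, scaled by its weight
                for k in range(K):
                    sense_vecs[wi, k] -= lr * p[k] * grad_emb
                # update the mixture weights through the softmax
                scores_k = sense_vecs[wi] @ grad_emb             # (K,)
                sense_logits[wi] -= lr * p * (scores_k - p @ scores_k)

print("mixture weights for 'bank':", softmax(sense_logits[w2i["bank"]]))
```

In this toy setup the per-word mixture weights drift toward whichever sense vectors best predict the observed contexts, which is the intuition behind weighting senses differently rather than treating them uniformly.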