Abstract
Creating accurate meta-embeddings from pre-trained source embeddings has received attention lately. Methods based on global and locally linear transformations and on concatenation have been shown to produce accurate meta-embeddings. In this paper, we show that the arithmetic mean of two distinct word embedding sets yields a performant meta-embedding that is comparable to, or better than, more complex meta-embedding learning methods. The result seems counter-intuitive, given that the vector spaces of different source embeddings are not comparable and cannot simply be averaged. We give insight into why averaging can still produce accurate meta-embeddings despite the incomparability of the source vector spaces.
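As a rough illustration of the averaging idea in the abstract (not the authors' released code), the AVG meta-embedding can be sketched in a few lines of NumPy. The sketch assumes sources of unequal dimensionality are zero-padded to a common length before averaging, as the paper describes; the variable names and toy values below are invented:

```python
import numpy as np

def avg_meta_embedding(u_src1: np.ndarray, u_src2: np.ndarray) -> np.ndarray:
    """Average two source vectors for the same word; the shorter
    vector is zero-padded so both have the same dimensionality."""
    d = max(len(u_src1), len(u_src2))
    p1 = np.pad(u_src1, (0, d - len(u_src1)))
    p2 = np.pad(u_src2, (0, d - len(u_src2)))
    return (p1 + p2) / 2.0

# Hypothetical toy vectors for one word from two sources.
u_glove = np.array([0.1, -0.3, 0.5, 0.0, 0.2])  # 5-d source
u_cbow = np.array([0.4, 0.1, -0.2])             # 3-d source, zero-padded to 5-d
meta = avg_meta_embedding(u_glove, u_cbow)
print(meta)  # [0.25 -0.1  0.15  0.   0.1 ]
```

The intuition the paper develops, roughly, is that difference vectors drawn from distinct high-dimensional source spaces are approximately orthogonal, so distances under averaging track distances under concatenation up to a constant factor and therefore preserve similarity rankings.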
- Anthology ID:
- N18-2031
- Volume:
- Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)
- Month:
- June
- Year:
- 2018
- Address:
- New Orleans, Louisiana
- Editors:
- Marilyn Walker, Heng Ji, Amanda Stent
- Venue:
- NAACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 194–198
- URL:
- https://aclanthology.org/N18-2031
- DOI:
- 10.18653/v1/N18-2031
- Cite (ACL):
- Joshua Coates and Danushka Bollegala. 2018. Frustratingly Easy Meta-Embedding – Computing Meta-Embeddings by Averaging Source Word Embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 194–198, New Orleans, Louisiana. Association for Computational Linguistics.
- Cite (Informal):
- Frustratingly Easy Meta-Embedding – Computing Meta-Embeddings by Averaging Source Word Embeddings (Coates & Bollegala, NAACL 2018)
- PDF:
- https://preview.aclanthology.org/emnlp22-frontmatter/N18-2031.pdf