Single Training Dimension Selection for Word Embedding with PCA

Yu Wang


Abstract
In this paper, we present a fast and reliable method based on PCA to select the number of dimensions for word embeddings. First, we train one embedding with a generous upper bound on the number of dimensions (e.g., 1,000). Then we transform the embeddings using PCA and incrementally remove the least significant dimensions one at a time while recording the embeddings' performance on language tasks. Lastly, we select the number of dimensions that balances model size and accuracy. Experiments using various datasets and language tasks demonstrate that we are able to train about 10 times fewer sets of embeddings while retaining optimal performance. Researchers interested in training the best-performing embeddings for downstream tasks, such as sentiment analysis, question answering, and hypernym extraction, as well as those interested in embedding compression, should find the method helpful.
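
The procedure in the abstract can be sketched in a few lines. The following is a minimal illustrative implementation, not the author's released code: the `evaluate` callable (a stand-in for scoring on a language task such as a word-similarity benchmark) and the tolerance-based selection rule are assumptions introduced for this example.

```python
# Minimal sketch of the PCA-based dimension-selection procedure described
# in the abstract. NOT the paper's code: `evaluate` and the tolerance-based
# selection rule below are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

def select_dimension(embeddings, evaluate, tolerance=0.01):
    """Pick a small embedding dimension whose task score stays near the best.

    embeddings: (vocab_size, max_dim) matrix trained once with a generous
                upper bound on dimensions (e.g., 1,000, per the abstract).
    evaluate:   callable scoring a (vocab_size, k) matrix on a language task
                (hypothetical stand-in for a real benchmark).
    tolerance:  acceptable drop from the best observed score (assumption).
    """
    max_dim = embeddings.shape[1]
    # A single PCA rotation concentrates variance in the leading dimensions,
    # so truncating trailing columns is equivalent to re-fitting PCA with
    # fewer components.
    rotated = PCA(n_components=max_dim).fit_transform(embeddings)

    # Remove the least significant dimension one at a time, recording the
    # task score at each size.
    scores = {k: evaluate(rotated[:, :k]) for k in range(max_dim, 0, -1)}

    best = max(scores.values())
    # Smallest model within `tolerance` of the best score: one way to realize
    # the size/accuracy trade-off the abstract describes.
    return min(k for k, s in scores.items() if s >= best - tolerance), scores
```

Because PCA orders components by explained variance, only one expensive training run is needed; each candidate size is a column truncation of the same rotated matrix, which is what allows roughly 10 times fewer embedding trainings than fitting a separate model per candidate dimension.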
Anthology ID:
D19-1369
Volume:
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Month:
November
Year:
2019
Address:
Hong Kong, China
Editors:
Kentaro Inui, Jing Jiang, Vincent Ng, Xiaojun Wan
Venues:
EMNLP | IJCNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
3597–3602
URL:
https://aclanthology.org/D19-1369
DOI:
10.18653/v1/D19-1369
Cite (ACL):
Yu Wang. 2019. Single Training Dimension Selection for Word Embedding with PCA. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3597–3602, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
Single Training Dimension Selection for Word Embedding with PCA (Wang, EMNLP-IJCNLP 2019)
PDF:
https://preview.aclanthology.org/improve-issue-templates/D19-1369.pdf
Data
WikiText-103 | WikiText-2