Explaining Word Embeddings via Disentangled Representation

Keng-Te Liao, Cheng-Syuan Lee, Zhong-Yu Huang, Shou-de Lin


Abstract
Disentangled representations have attracted increasing attention recently. However, it remains unclear how to transfer the desirable properties of disentanglement to word representations. In this work, we propose transforming typical dense word vectors into disentangled embeddings with improved interpretability, achieved by encoding polysemous semantics separately. We also find that the modular structure of our disentangled word embeddings helps generate more efficient and effective features for natural language processing tasks.
Anthology ID:
2020.aacl-main.72
Volume:
Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing
Month:
December
Year:
2020
Address:
Suzhou, China
Editors:
Kam-Fai Wong, Kevin Knight, Hua Wu
Venue:
AACL
Publisher:
Association for Computational Linguistics
Pages:
720–725
URL:
https://aclanthology.org/2020.aacl-main.72
Cite (ACL):
Keng-Te Liao, Cheng-Syuan Lee, Zhong-Yu Huang, and Shou-de Lin. 2020. Explaining Word Embeddings via Disentangled Representation. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 720–725, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Explaining Word Embeddings via Disentangled Representation (Liao et al., AACL 2020)
PDF:
https://preview.aclanthology.org/naacl24-info/2020.aacl-main.72.pdf
Data
IMDb Movie Reviews