CLUSE: Cross-Lingual Unsupervised Sense Embeddings

Ta-Chung Chi, Yun-Nung Chen


Abstract
This paper proposes a modularized sense induction and representation learning model that jointly learns bilingual sense embeddings aligned in a shared vector space. The cross-lingual signal in an English-Chinese parallel corpus is exploited to capture the collocational and distributional characteristics of the language pair. The model is evaluated on the Stanford Contextual Word Similarity (SCWS) dataset to ensure the quality of the monolingual sense embeddings. In addition, we introduce Bilingual Contextual Word Similarity (BCWS), a large, high-quality dataset for evaluating cross-lingual sense embeddings; it is the first attempt to measure whether the learned embeddings are indeed well aligned in the vector space. The proposed approach yields superior sense embeddings in both monolingual and bilingual evaluations.
Anthology ID:
D18-1025
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
271–281
URL:
https://aclanthology.org/D18-1025
DOI:
10.18653/v1/D18-1025
Cite (ACL):
Ta-Chung Chi and Yun-Nung Chen. 2018. CLUSE: Cross-Lingual Unsupervised Sense Embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 271–281, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
CLUSE: Cross-Lingual Unsupervised Sense Embeddings (Chi & Chen, EMNLP 2018)
PDF:
https://preview.aclanthology.org/emnlp22-frontmatter/D18-1025.pdf
Code:
MiuLab/CLUSE