Unsupervised Bilingual Lexicon Induction via Latent Variable Models

Zi-Yi Dou, Zhi-Hao Zhou, Shujian Huang


Abstract
Bilingual lexicon extraction has been studied for decades, and most previous methods rely on parallel corpora or bilingual dictionaries. Recent studies have shown that a bilingual dictionary can be built by aligning monolingual word embedding spaces in an unsupervised way. Building on recent advances in generative models, we propose a novel approach that constructs cross-lingual dictionaries via latent variable models and adversarial training, with no parallel corpora. To demonstrate its effectiveness, we evaluate the approach on several language pairs; the experimental results show that our model achieves competitive and even superior performance compared with several state-of-the-art models.
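To make the embedding-alignment idea concrete, here is a minimal sketch of mapping one embedding space onto another with an orthogonal matrix via the Procrustes solution on synthetic data. This is an illustration of the general alignment setup only, not the paper's latent-variable or adversarial method; in a fully unsupervised setting such as the one studied here, the word pairing is induced by the model rather than given.

```python
import numpy as np

# Illustrative sketch (not the paper's method): align a synthetic
# "source" embedding space with a "target" space that is an exact
# rotation of it, using the orthogonal Procrustes solution.

rng = np.random.default_rng(0)
n, d = 100, 8
X = rng.normal(size=(n, d))                    # source word embeddings
Q_true, _ = np.linalg.qr(rng.normal(size=(d, d)))
Y = X @ Q_true                                 # target embeddings (rotated source)

# Procrustes: the orthogonal W minimizing ||XW - Y|| is U V^T,
# where U S V^T is the SVD of X^T Y.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

# After alignment, X @ W matches Y; nearest-neighbour search in the
# shared space would then induce the bilingual lexicon.
print(np.linalg.norm(X @ W - Y) < 1e-6)        # alignment error is tiny
```

In practice the two spaces come from independently trained monolingual embeddings, so the match is only approximate and the mapping must be learned without the gold pairing assumed above.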
Anthology ID:
D18-1062
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
621–626
URL:
https://aclanthology.org/D18-1062
DOI:
10.18653/v1/D18-1062
Cite (ACL):
Zi-Yi Dou, Zhi-Hao Zhou, and Shujian Huang. 2018. Unsupervised Bilingual Lexicon Induction via Latent Variable Models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 621–626, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Unsupervised Bilingual Lexicon Induction via Latent Variable Models (Dou et al., EMNLP 2018)
PDF:
https://preview.aclanthology.org/ingestion-script-update/D18-1062.pdf
Video:
https://vimeo.com/305211999