Knowledge Distillation for Bilingual Dictionary Induction

Ndapandula Nakashole, Raphael Flauger


Abstract
Leveraging zero-shot learning to learn mapping functions between vector spaces of different languages is a promising approach to bilingual dictionary induction. However, methods using this approach have not yet achieved high accuracy on the task. In this paper, we propose a bridging approach whose main contribution is a knowledge distillation training objective: rich-resource translation paths serve as teachers, and translation paths involving low-resource languages act as learners that learn from them. Our training objective allows teacher translation paths to be added seamlessly for any given low-resource pair. Because our approach relies on the quality of monolingual word embeddings, we also propose enhancing the vector representations of both the source and target languages with linguistic information. Our experiments on various languages show large performance gains from the distillation training objective, with accuracy improvements of up to 17%.
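
To make the abstract's objective concrete, here is a minimal NumPy sketch of one plausible reading, not the authors' implementation: a student linear mapping W from a low-resource source embedding space to a target space is trained on seed translation pairs, with an added distillation term pulling its predictions toward those of pre-trained teacher mappings routed through rich-resource pivots. The matrices, the loss weighting lam, and all variable names are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    d, n = 50, 200  # embedding dimension and number of seed pairs (illustrative)

    # Stand-ins for pre-trained monolingual embeddings of seed translation pairs:
    # X holds low-resource source-language vectors, Y their target translations.
    X = rng.normal(size=(n, d))
    Y = rng.normal(size=(n, d))

    # Teacher translation paths: source->target mappings composed through
    # rich-resource pivot languages (random stand-ins here for illustration).
    teachers = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(2)]

    W = np.zeros((d, d))  # student mapping for the low-resource pair
    lam, lr = 0.5, 1e-3   # distillation weight and step size (assumed values)

    for _ in range(500):
        pred = X @ W
        # Supervised term: fit the seed dictionary, ||XW - Y||^2.
        grad = 2.0 * X.T @ (pred - Y) / n
        # Distillation term: agree with each teacher path, ||XW - X T||^2.
        for T in teachers:
            grad += lam * 2.0 * X.T @ (pred - X @ T) / (n * len(teachers))
        W -= lr * grad

    # At test time a word is translated by mapping its vector with W and
    # taking the nearest target-language neighbor (nearest-neighbor search
    # omitted here).

Under this reading, adding a teacher translation path for a given low-resource pair amounts to appending one more matrix to the teachers list, which mirrors the seamless-addition property the abstract claims.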
Anthology ID:
D17-1264
Volume:
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Editors:
Martha Palmer, Rebecca Hwa, Sebastian Riedel
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
2497–2506
URL:
https://aclanthology.org/D17-1264
DOI:
10.18653/v1/D17-1264
Cite (ACL):
Ndapandula Nakashole and Raphael Flauger. 2017. Knowledge Distillation for Bilingual Dictionary Induction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2497–2506, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Knowledge Distillation for Bilingual Dictionary Induction (Nakashole & Flauger, EMNLP 2017)
PDF:
https://aclanthology.org/D17-1264.pdf