Continuous multilinguality with language vectors

Robert Östling, Jörg Tiedemann

Abstract
Most existing models for multilingual natural language processing (NLP) treat language as a discrete category and make predictions for one language or another. In contrast, we propose using continuous vector representations of language. We show that these can be learned efficiently with a character-based neural language model, and used to improve inference about language varieties not seen during training. In experiments with 1303 Bible translations into 990 different languages, we empirically explore the capacity of multilingual language models, and also show that the language vectors capture genetic relationships between languages.
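The core idea of conditioning a character-based language model on a continuous language vector can be illustrated as follows. This is a minimal NumPy sketch, not the authors' implementation: the vocabulary sizes, embedding dimensions, and the function `lm_inputs` are hypothetical, and training of the lookup tables is omitted. The one detail taken from the paper is the number of languages (990).

```python
import numpy as np

rng = np.random.default_rng(0)

n_langs, n_chars = 990, 100              # 990 languages as in the paper; char inventory is illustrative
d_lang, d_char = 64, 128                 # hypothetical embedding dimensions

# Trainable lookup tables: one continuous vector per language, one per character.
# In the real model these would be learned jointly with the language model.
lang_vectors = rng.normal(size=(n_langs, d_lang)) * 0.01
char_vectors = rng.normal(size=(n_chars, d_char)) * 0.01

def lm_inputs(char_ids, lang_id):
    """Concatenate the language vector to every character embedding,
    producing the per-timestep input of a language-conditioned char LM."""
    chars = char_vectors[char_ids]                              # (T, d_char)
    lang = np.broadcast_to(lang_vectors[lang_id],
                           (len(char_ids), d_lang))             # (T, d_lang)
    return np.concatenate([chars, lang], axis=1)                # (T, d_char + d_lang)

x = lm_inputs([5, 17, 3], lang_id=42)
print(x.shape)  # (3, 192)
```

Because the language is represented as a point in a continuous space rather than a discrete label, inference about an unseen language variety amounts to estimating a new vector in this space, and distances between learned vectors can reflect genetic relationships between languages.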
Anthology ID:
E17-2102
Volume:
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
Month:
April
Year:
2017
Address:
Valencia, Spain
Editors:
Mirella Lapata, Phil Blunsom, Alexander Koller
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
644–649
URL:
https://aclanthology.org/E17-2102
Cite (ACL):
Robert Östling and Jörg Tiedemann. 2017. Continuous multilinguality with language vectors. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 644–649, Valencia, Spain. Association for Computational Linguistics.
Cite (Informal):
Continuous multilinguality with language vectors (Östling & Tiedemann, EACL 2017)
PDF:
https://preview.aclanthology.org/nschneid-patch-1/E17-2102.pdf