Polyglot Contextual Representations Improve Crosslingual Transfer

Phoebe Mulcaire, Jungo Kasai, Noah A. Smith


Abstract
We introduce Rosita, a method to produce multilingual contextual word representations by training a single language model on text from multiple languages. Our method combines the advantages of contextual word representations with those of multilingual representation learning. We produce language models from dissimilar language pairs (English/Arabic and English/Chinese) and use them in dependency parsing, semantic role labeling, and named entity recognition, with comparisons to monolingual and non-contextual variants. Our results provide further evidence for the benefits of polyglot learning, in which representations are shared across multiple languages.
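
The released code (pmulcaire/rosita) builds on AllenNLP's ELMo implementation. As a rough illustration of how such polyglot contextual representations are consumed downstream, the sketch below loads an ELMo-style bilingual model with AllenNLP's ElmoEmbedder and embeds one English and one Arabic sentence with the same network. The option/weight file names are hypothetical placeholders, not the repository's actual artifacts.

# A minimal sketch of using a polyglot (English/Arabic) ELMo-style model,
# as described in the paper, via AllenNLP's ElmoEmbedder.
# The file paths below are hypothetical placeholders.
from allennlp.commands.elmo import ElmoEmbedder

embedder = ElmoEmbedder(
    options_file="rosita_en_ar_options.json",  # hypothetical path
    weight_file="rosita_en_ar_weights.hdf5",   # hypothetical path
)

# A single language model serves both languages: the same parameters embed
# English and Arabic tokens, so downstream parsers and taggers can share them.
english = ["The", "cat", "sat", "on", "the", "mat", "."]
arabic = ["جلس", "القط", "على", "الحصيرة", "."]

for sentence in (english, arabic):
    # embed_sentence returns an array of shape (num_layers=3, num_tokens, 1024);
    # downstream task models typically learn a weighted average over the layers.
    layers = embedder.embed_sentence(sentence)
    print(layers.shape)
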
Anthology ID: N19-1392
Volume: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
Month: June
Year: 2019
Address: Minneapolis, Minnesota
Editors: Jill Burstein, Christy Doran, Thamar Solorio
Venue: NAACL
Publisher: Association for Computational Linguistics
Pages: 3912–3918
URL: https://aclanthology.org/N19-1392
DOI: 10.18653/v1/N19-1392
Cite (ACL): Phoebe Mulcaire, Jungo Kasai, and Noah A. Smith. 2019. Polyglot Contextual Representations Improve Crosslingual Transfer. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3912–3918, Minneapolis, Minnesota. Association for Computational Linguistics.
Cite (Informal): Polyglot Contextual Representations Improve Crosslingual Transfer (Mulcaire et al., NAACL 2019)
PDF: https://preview.aclanthology.org/nschneid-patch-1/N19-1392.pdf
Code: pmulcaire/rosita
Data: OntoNotes 5.0