Larger-Scale Transformers for Multilingual Masked Language Modeling

Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau


Abstract
Recent work has demonstrated the effectiveness of cross-lingual language model pretraining for cross-lingual understanding. In this study, we present the results of two larger multilingual masked language models, with 3.5B and 10.7B parameters. Our two new models, dubbed XLM-R XL and XLM-R XXL, outperform XLM-R by 1.8% and 2.4% average accuracy on XNLI. Our model also outperforms the RoBERTa-Large model on several English tasks of the GLUE benchmark by 0.3% on average while handling 99 more languages. This suggests larger-capacity models for language understanding may obtain strong performance on high-resource languages while greatly improving low-resource languages. We make our code and models publicly available.
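
As an illustration of the multilingual masked language modeling setup the abstract describes, the minimal sketch below queries a publicly released XLM-R checkpoint with Hugging Face's fill-mask pipeline. The checkpoint name "xlm-roberta-large" and the example sentences are assumptions for demonstration, not artifacts of this paper; the 3.5B and 10.7B models described here may be published under different identifiers.

from transformers import pipeline

# Minimal sketch, assuming the Hugging Face `transformers` library and the
# public "xlm-roberta-large" checkpoint (a smaller relative of the 3.5B/10.7B
# models in this paper; the checkpoint name is an assumption).
fill_mask = pipeline("fill-mask", model="xlm-roberta-large")

# XLM-R uses "<mask>" as its mask token; a single model covers ~100 languages.
print(fill_mask("Paris is the <mask> of France."))   # English
print(fill_mask("París es la <mask> de Francia."))   # Spanish

Because the same masked language model is pretrained on text from roughly 100 languages, the same fill-in-the-blank query works across languages without any per-language fine-tuning.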
Anthology ID:
2021.repl4nlp-1.4
Volume:
Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021)
Month:
August
Year:
2021
Address:
Online
Editors:
Anna Rogers, Iacer Calixto, Ivan Vulić, Naomi Saphra, Nora Kassner, Oana-Maria Camburu, Trapit Bansal, Vered Shwartz
Venue:
RepL4NLP
Association for Computational Linguistics
Pages:
29–33
URL:
https://aclanthology.org/2021.repl4nlp-1.4
DOI:
10.18653/v1/2021.repl4nlp-1.4
Cite (ACL):
Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, and Alexis Conneau. 2021. Larger-Scale Transformers for Multilingual Masked Language Modeling. In Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021), pages 29–33, Online. Association for Computational Linguistics.
Cite (Informal):
Larger-Scale Transformers for Multilingual Masked Language Modeling (Goyal et al., RepL4NLP 2021)
PDF:
https://aclanthology.org/2021.repl4nlp-1.4.pdf
Data
C4, CC100, GLUE, MLQA, MultiNLI, QNLI, SST, XQuAD, mC4