Abstract
Multilingual pre-trained language models, such as mBERT and XLM-R, have shown impressive cross-lingual ability. Surprisingly, both are trained with a multilingual masked language model (MLM) objective, without any cross-lingual supervision or aligned data. Despite the encouraging results, we still lack a clear understanding of why cross-lingual ability emerges from multilingual MLM. In this work, we argue that cross-lingual ability comes from the commonality between languages. Specifically, we study three language properties: constituent order, composition, and word co-occurrence. First, we create an artificial language by modifying a property of the source language. Then we measure the contribution of the modified property through the change in cross-lingual transfer results on the target language. We conduct experiments on six languages and two cross-lingual NLP tasks (textual entailment, sentence retrieval). Our main conclusion is that the contribution of constituent order and word co-occurrence is limited, while composition is more crucial to the success of cross-lingual transfer.
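As a rough illustration of the "artificial language" idea described in the abstract, the minimal sketch below perturbs word order in a source-language corpus while keeping the vocabulary fixed; the `perturb_order` function and the reversal/shuffle perturbations are illustrative assumptions, not the authors' exact constituent-order manipulation.

```python
# Hypothetical sketch: derive an "artificial language" corpus by perturbing
# word order in the source-language data, then pre-train and evaluate transfer
# as usual. Reversing or shuffling tokens is only an illustration, not the
# exact constituent-order manipulation used in the paper.
import random

def perturb_order(sentence: str, mode: str = "reverse", seed: int = 0) -> str:
    """Return a word-order-perturbed copy of a whitespace-tokenized sentence."""
    tokens = sentence.split()
    if mode == "reverse":      # invert linear order
        tokens = tokens[::-1]
    elif mode == "shuffle":    # destroy order information entirely
        rng = random.Random(seed)
        rng.shuffle(tokens)
    return " ".join(tokens)

# Build the artificial corpus: only the studied property (order) changes.
source = ["the cat sat on the mat", "language models learn structure"]
artificial = [perturb_order(s, mode="reverse") for s in source]
print(artificial)  # ['mat the on sat cat the', 'structure learn models language']
```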
- Anthology ID: 2022.acl-long.322
- Volume: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
- Month: May
- Year: 2022
- Address: Dublin, Ireland
- Editors: Smaranda Muresan, Preslav Nakov, Aline Villavicencio
- Venue: ACL
- Publisher: Association for Computational Linguistics
- Pages: 4702–4712
- URL: https://aclanthology.org/2022.acl-long.322
- DOI: 10.18653/v1/2022.acl-long.322
- Cite (ACL): Yuan Chai, Yaobo Liang, and Nan Duan. 2022. Cross-Lingual Ability of Multilingual Masked Language Models: A Study of Language Structure. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4702–4712, Dublin, Ireland. Association for Computational Linguistics.
- Cite (Informal): Cross-Lingual Ability of Multilingual Masked Language Models: A Study of Language Structure (Chai et al., ACL 2022)
- PDF: https://preview.aclanthology.org/proper-vol2-ingestion/2022.acl-long.322.pdf
- Data: XNLI