Abstract
Pre-trained multilingual language models have become an important building block in multilingual Natural Language Processing. In the present paper, we investigate a range of such models to find out how well they transfer discourse-level knowledge across languages. This is done with a systematic evaluation on a broader set of discourse-level tasks than has previously been assembled. We find that the XLM-RoBERTa family of models consistently shows the best performance, by simultaneously being good monolingual models and degrading relatively little in the zero-shot setting. Our results also indicate that model distillation may hurt the cross-lingual transfer ability of sentence representations, while language dissimilarity has at most a modest effect. We hope that our test suite, covering 5 tasks with a total of 22 languages in 10 distinct families, will serve as a useful evaluation platform for multilingual performance at and beyond the sentence level.
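To make the zero-shot evaluation setup concrete, below is a minimal sketch of how cross-lingual probing of sentence representations can be run: sentence vectors are mean-pooled from a frozen multilingual encoder, a lightweight probe is trained on English examples only, and the same probe is then scored on another language without further training. The model name, pooling strategy, toy data, and probe task are illustrative assumptions, not the exact configuration used in the paper.

```python
# Sketch of a zero-shot cross-lingual probing setup (assumed configuration):
# frozen multilingual encoder -> mean-pooled sentence vectors -> linear probe
# trained on English, evaluated on a different language.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "xlm-roberta-base"  # one member of the XLM-RoBERTa family
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)
encoder.eval()

def embed(sentences):
    """Mean-pool the final hidden states into fixed-size sentence vectors."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state   # (batch, seq_len, dim)
    mask = batch["attention_mask"].unsqueeze(-1)       # (batch, seq_len, 1)
    pooled = (hidden * mask).sum(1) / mask.sum(1)      # masked mean pooling
    return pooled.numpy()

# Hypothetical toy data: train the probe on English, test it on German.
en_sents, en_labels = ["The plan failed.", "However, it failed."], [0, 1]
de_sents, de_labels = ["Der Plan scheiterte.", "Er scheiterte jedoch."], [0, 1]

probe = LogisticRegression(max_iter=1000).fit(embed(en_sents), en_labels)
print("zero-shot accuracy:", probe.score(embed(de_sents), de_labels))
```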
- Anthology ID: 2021.repl4nlp-1.2
- Volume: Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021)
- Month: August
- Year: 2021
- Address: Online
- Venue: RepL4NLP
- Publisher: Association for Computational Linguistics
- Pages: 8–19
- URL: https://aclanthology.org/2021.repl4nlp-1.2
- DOI: 10.18653/v1/2021.repl4nlp-1.2
- Cite (ACL): Murathan Kurfalı and Robert Östling. 2021. Probing Multilingual Language Models for Discourse. In Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021), pages 8–19, Online. Association for Computational Linguistics.
- Cite (Informal): Probing Multilingual Language Models for Discourse (Kurfalı & Östling, RepL4NLP 2021)
- PDF: https://preview.aclanthology.org/auto-file-uploads/2021.repl4nlp-1.2.pdf
- Data: GLUE, MultiNLI, SQuAD, XNLI, XQuAD, x-stance