DOCmT5: Document-Level Pretraining of Multilingual Language Models

Chia-Hsuan Lee, Aditya Siddhant, Viresh Ratnakar, Melvin Johnson


Abstract
In this paper, we introduce DOCmT5, a multilingual sequence-to-sequence language model pretrained with large-scale parallel documents. While previous approaches have focused on leveraging sentence-level parallel data, we aim to build a general-purpose pretrained model that can understand and generate long documents. We propose a simple and effective pretraining objective, Document reordering Machine Translation (DrMT), in which shuffled and masked input documents must be translated. DrMT brings consistent improvements over strong baselines on a variety of document-level generation tasks, including over 12 BLEU points for seen-language-pair document-level MT, over 7 BLEU points for unseen-language-pair document-level MT, and over 3 ROUGE-1 points for seen-language-pair cross-lingual summarization. We achieve state-of-the-art (SOTA) results on the WMT20 De-En and IWSLT15 Zh-En document translation tasks. We also conduct extensive analysis of various factors for document pretraining, including (1) the effects of pretraining data quality and (2) the effects of combining monolingual and cross-lingual pretraining. We plan to make our model checkpoints publicly available.
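As an illustration of the DrMT objective described in the abstract, the following is a minimal sketch of how a single training example could be constructed: the source side of a parallel document is sentence-shuffled and token-masked, and the model is trained to produce the clean, correctly ordered translation. The helper name make_drmt_example, the mask rate, and the per-token sentinel masking are assumptions for illustration and simplify the span masking used in mT5-style models; this is not the authors' implementation.

import random

MASK_RATE = 0.15            # illustrative masking rate, not the paper's value
SENTINEL = "<extra_id_{}>"  # mT5-style sentinel token template

def make_drmt_example(src_sentences, tgt_document, seed=0):
    """Build one hypothetical DrMT training pair from a parallel document."""
    rng = random.Random(seed)

    # 1) Shuffle the order of the source-language sentences.
    shuffled = src_sentences[:]
    rng.shuffle(shuffled)

    # 2) Mask a fraction of the source tokens with sentinel tokens
    #    (simplified: one sentinel per masked token, not per span).
    tokens = " ".join(shuffled).split()
    masked, sentinel_id = [], 0
    for tok in tokens:
        if rng.random() < MASK_RATE:
            masked.append(SENTINEL.format(sentinel_id))
            sentinel_id += 1
        else:
            masked.append(tok)

    # 3) The target is the original, unshuffled and unmasked translation,
    #    so the model must jointly reorder, de-noise, and translate.
    return {"input": " ".join(masked), "target": tgt_document}

# Toy usage with a three-sentence German document and its English translation.
example = make_drmt_example(
    ["Satz zwei .", "Satz eins .", "Satz drei ."],
    "Sentence one . Sentence two . Sentence three .",
)
print(example["input"])
print(example["target"])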
Anthology ID:
2022.findings-naacl.32
Volume:
Findings of the Association for Computational Linguistics: NAACL 2022
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
425–437
URL:
https://aclanthology.org/2022.findings-naacl.32
DOI:
10.18653/v1/2022.findings-naacl.32
Cite (ACL):
Chia-Hsuan Lee, Aditya Siddhant, Viresh Ratnakar, and Melvin Johnson. 2022. DOCmT5: Document-Level Pretraining of Multilingual Language Models. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 425–437, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
DOCmT5: Document-Level Pretraining of Multilingual Language Models (Lee et al., Findings 2022)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/2022.findings-naacl.32.pdf
Video:
https://preview.aclanthology.org/emnlp-22-attachments/2022.findings-naacl.32.mp4
Data
C4, IWSLT2015, WMT 2020, WikiLingua, mC4