Improving Pretrained Cross-Lingual Language Models via Self-Labeled Word Alignment
Zewen Chi, Li Dong, Bo Zheng, Shaohan Huang, Xian-Ling Mao, Heyan Huang, Furu Wei
Abstract
Cross-lingual language models are typically pretrained with masked language modeling on multilingual text or parallel sentences. In this paper, we introduce denoising word alignment as a new cross-lingual pre-training task. Specifically, the model first self-labels word alignments for parallel sentences. Then we randomly mask tokens in a bitext pair. Given a masked token, the model uses a pointer network to predict the aligned token in the other language. We alternately perform the above two steps in an expectation-maximization manner. Experimental results show that our method improves cross-lingual transferability on various datasets, especially on token-level tasks such as question answering and structured prediction. Moreover, the model can serve as a pretrained word aligner, which achieves a reasonably low error rate on alignment benchmarks. The code and pretrained parameters are available at github.com/CZWin32768/XLM-Align.
- Anthology ID:
- 2021.acl-long.265
- Volume:
- Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
- Month:
- August
- Year:
- 2021
- Address:
- Online
- Editors:
- Chengqing Zong, Fei Xia, Wenjie Li, Roberto Navigli
- Venues:
- ACL | IJCNLP
- Publisher:
- Association for Computational Linguistics
- Pages:
- 3418–3430
- URL:
- https://aclanthology.org/2021.acl-long.265
- DOI:
- 10.18653/v1/2021.acl-long.265
- Cite (ACL):
- Zewen Chi, Li Dong, Bo Zheng, Shaohan Huang, Xian-Ling Mao, Heyan Huang, and Furu Wei. 2021. Improving Pretrained Cross-Lingual Language Models via Self-Labeled Word Alignment. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3418–3430, Online. Association for Computational Linguistics.
- Cite (Informal):
- Improving Pretrained Cross-Lingual Language Models via Self-Labeled Word Alignment (Chi et al., ACL-IJCNLP 2021)
- PDF:
- https://aclanthology.org/2021.acl-long.265.pdf
- Code:
- CZWin32768/XLM-Align
- Data:
- MLQA, PAWS-X, TyDiQA, XNLI, XQuAD
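
The abstract above describes the core pretraining step: tokens in a bitext pair are masked, and a pointer network predicts, for each masked token, the aligned token in the other language, trained against self-labeled alignments in an expectation-maximization alternation. Below is a minimal, hypothetical PyTorch sketch of that pointer step. The function names and the scaled dot-product scoring are illustrative assumptions, not the authors' released implementation; see CZWin32768/XLM-Align for the real code.

```python
import torch
import torch.nn.functional as F

def pointer_alignment_logits(src_hidden, tgt_hidden):
    # (src_len, d) x (d, tgt_len) -> (src_len, tgt_len)
    # Scaled dot-product scores: how strongly each source position
    # points at each target position.
    return src_hidden @ tgt_hidden.t() / src_hidden.size(-1) ** 0.5

def denoising_word_alignment_loss(src_hidden, tgt_hidden, masked_pos, aligned_pos):
    # Compute pointer logits only for the masked source positions.
    logits = pointer_alignment_logits(src_hidden[masked_pos], tgt_hidden)
    # Cross-entropy over target positions, with the self-labeled
    # alignments as targets (the maximization half of the EM loop).
    return F.cross_entropy(logits, aligned_pos)

# Toy usage: random tensors stand in for encoder hidden states.
d = 768
src = torch.randn(12, d, requires_grad=True)  # masked sentence, 12 subwords
tgt = torch.randn(15, d, requires_grad=True)  # parallel sentence, 15 subwords
masked = torch.tensor([3, 7])   # positions that were replaced by [MASK]
aligned = torch.tensor([4, 9])  # self-labeled aligned target positions
loss = denoising_word_alignment_loss(src, tgt, masked, aligned)
loss.backward()  # in pretraining, this gradient would update the encoder
```

In the actual method, the `aligned` positions are produced by the model's own self-labeling step rather than given in advance; this sketch takes them as fixed inputs.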