Abstract
Self-supervised neural machine translation (SSNMT) jointly learns to identify and select suitable training data from comparable (rather than parallel) corpora and to translate, in a way that the two tasks support each other in a virtuous circle. In this study, we provide an in-depth analysis of the sampling choices the SSNMT model makes during training. We show how, without it having been told to do so, the model self-selects samples of increasing (i) complexity and (ii) task-relevance in combination with (iii) performing a denoising curriculum. We observe that the dynamics of the mutual-supervision signals of both system-internal representation types are vital for the extraction and translation performance. We show that in terms of the Gunning-Fog Readability index, SSNMT starts extracting and learning from Wikipedia data suitable for high school students and quickly moves towards content suitable for first-year undergraduate students.
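The Gunning-Fog index referenced in the abstract estimates the years of formal education needed to understand a text on first reading: GF = 0.4 × (average sentence length + 100 × fraction of complex words), where complex words are conventionally those with three or more syllables. Below is a minimal Python sketch of this formula, assuming naive regex-based sentence/word splitting and a vowel-run syllable heuristic; it does not reproduce the paper's exact preprocessing or the standard exclusions (e.g. proper nouns and familiar jargon) from the complex-word count.

```python
# Minimal sketch of the Gunning-Fog readability index:
#   GF = 0.4 * (words/sentences + 100 * complex_words/words)
# Assumptions (not from the paper): naive sentence/word splitting
# and a vowel-run heuristic for syllable counting.
import re

def count_syllables(word: str) -> int:
    # Approximate syllable count as the number of vowel runs (heuristic).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def gunning_fog(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    # "Complex" words: three or more (approximate) syllables.
    complex_words = [w for w in words if count_syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100.0 * len(complex_words) / len(words))

if __name__ == "__main__":
    sample = ("Self-supervised neural machine translation jointly learns "
              "to identify suitable training data and to translate.")
    print(f"Gunning-Fog index: {gunning_fog(sample):.1f}")
```

A score around 12 corresponds roughly to a high-school senior, and scores of 13 and above to undergraduate-level reading, which is the scale on which the abstract's curriculum observation is stated.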
- Anthology ID:
- 2020.emnlp-main.202
- Volume:
- Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
- Month:
- November
- Year:
- 2020
- Address:
- Online
- Venue:
- EMNLP
- Publisher:
- Association for Computational Linguistics
- Pages:
- 2560–2571
- URL:
- https://aclanthology.org/2020.emnlp-main.202
- DOI:
- 10.18653/v1/2020.emnlp-main.202
- Cite (ACL):
- Dana Ruiter, Josef van Genabith, and Cristina España-Bonet. 2020. Self-Induced Curriculum Learning in Self-Supervised Neural Machine Translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2560–2571, Online. Association for Computational Linguistics.
- Cite (Informal):
- Self-Induced Curriculum Learning in Self-Supervised Neural Machine Translation (Ruiter et al., EMNLP 2020)
- PDF:
- https://preview.aclanthology.org/nodalida-main-page/2020.emnlp-main.202.pdf
- Data
- WikiMatrix