Learning a Multi-Domain Curriculum for Neural Machine Translation
Wei Wang, Ye Tian, Jiquan Ngiam, Yinfei Yang, Isaac Caswell, Zarana Parekh
Abstract
Most data selection research in machine translation focuses on improving a single domain. We perform data selection for multiple domains at once. This is achieved by carefully introducing instance-level domain-relevance features and automatically constructing a training curriculum to gradually concentrate on multi-domain relevant and noise-reduced data batches. Both the choice of features and the use of curriculum are crucial for balancing and improving all domains, including out-of-domain. In large-scale experiments, the multi-domain curriculum simultaneously reaches or outperforms the individual performance and brings solid gains over no-curriculum training.
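For intuition, here is a minimal, hypothetical sketch of the batch-selection idea the abstract describes: score each training example for relevance to each target domain, combine the per-domain scores, and gradually restrict sampling to the top-scoring fraction of the corpus as training progresses. All names (`combined_relevance`, `curriculum_batches`, the fixed domain weights) are invented for illustration; in the paper the per-domain weights are learned rather than hand-set, and this is not the authors' implementation.

```python
import random

def combined_relevance(domain_scores, domain_weights):
    """Combine per-domain relevance features into a single score.

    domain_scores: dict mapping domain name -> relevance score for one example.
    domain_weights: dict mapping domain name -> mixing weight (assumed fixed
    here; learned in the paper).
    """
    return sum(domain_weights[d] * s for d, s in domain_scores.items())

def curriculum_batches(corpus, domain_weights, num_steps, batch_size,
                       final_keep_ratio=0.2):
    """Yield batches that gradually concentrate on high-relevance examples.

    corpus: list of (sentence_pair, per-domain score dict) tuples; must
    contain at least batch_size examples. Early in training we sample from
    (nearly) all data; by the end, only from the top final_keep_ratio
    fraction ranked by combined relevance.
    """
    ranked = sorted(
        corpus,
        key=lambda ex: combined_relevance(ex[1], domain_weights),
        reverse=True,
    )
    for step in range(num_steps):
        progress = step / max(1, num_steps - 1)  # 0.0 -> 1.0 over training
        keep_ratio = 1.0 - progress * (1.0 - final_keep_ratio)
        pool = ranked[: max(batch_size, int(len(ranked) * keep_ratio))]
        yield [pair for pair, _ in random.sample(pool, batch_size)]

if __name__ == "__main__":
    # Toy corpus with random per-domain relevance scores.
    toy_corpus = [
        ((f"src-{i}", f"tgt-{i}"),
         {"news": random.random(), "patents": random.random()})
        for i in range(100)
    ]
    weights = {"news": 0.7, "patents": 0.3}  # assumed fixed for this sketch
    for batch in curriculum_batches(toy_corpus, weights,
                                    num_steps=5, batch_size=8):
        print(len(batch), batch[0])
```

The schedule here is a simple linear shrink of the sampling pool; the paper's curriculum and noise-reduction criteria are more involved, but the mechanism of progressively concentrating batches on multi-domain relevant data is the same in shape.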
- Anthology ID: 2020.acl-main.689
- Volume: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
- Month: July
- Year: 2020
- Address: Online
- Editors: Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
- Venue: ACL
- Publisher: Association for Computational Linguistics
- Pages: 7711–7723
- URL: https://aclanthology.org/2020.acl-main.689
- DOI: 10.18653/v1/2020.acl-main.689
- Cite (ACL): Wei Wang, Ye Tian, Jiquan Ngiam, Yinfei Yang, Isaac Caswell, and Zarana Parekh. 2020. Learning a Multi-Domain Curriculum for Neural Machine Translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7711–7723, Online. Association for Computational Linguistics.
- Cite (Informal): Learning a Multi-Domain Curriculum for Neural Machine Translation (Wang et al., ACL 2020)
- PDF: https://preview.aclanthology.org/landing_page/2020.acl-main.689.pdf