Abstract
Despite impressive empirical successes of neural machine translation (NMT) on standard benchmarks, limited parallel data impedes the application of NMT models to many language pairs. Data augmentation methods such as back-translation make it possible to use monolingual data to help alleviate these issues, but back-translation itself fails in extreme low-resource scenarios, especially for syntactically divergent languages. In this paper, we propose a simple yet effective solution, whereby target-language sentences are re-ordered to match the order of the source and used as an additional source of training-time supervision. Experiments with simulated low-resource Japanese-to-English and real low-resource Uyghur-to-English scenarios find significant improvements over other semi-supervised alternatives.
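To make the augmentation idea concrete, here is a minimal sketch of how reordered target sentences could be turned into extra training pairs. This is an illustration, not the authors' implementation (see the linked repository for that); the `reorder_to_source_order` helper is hypothetical and stands in for a reordering procedure, e.g. one derived from syntactic parses. Each target-language sentence is rewritten into source-language word order and paired with its original as a pseudo-parallel example.

```python
from typing import Callable, Iterable, List, Tuple

def augment_with_reordered_targets(
    parallel: List[Tuple[str, str]],
    monolingual_tgt: Iterable[str],
    reorder_to_source_order: Callable[[str], str],  # hypothetical helper
) -> List[Tuple[str, str]]:
    """Extend a parallel corpus with (reordered-target, target) pseudo-pairs."""
    augmented = list(parallel)
    for tgt in monolingual_tgt:
        # Rewrite the target sentence into source-language word order: the
        # result keeps target-language words but source-like syntax, giving
        # the model additional supervision on word-order divergence.
        pseudo_src = reorder_to_source_order(tgt)
        augmented.append((pseudo_src, tgt))
    return augmented
```

Under this scheme, monolingual target data yields pseudo-parallel pairs without a trained backward model, which is why it can help where back-translation fails in extreme low-resource settings.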
- Anthology ID: D19-1143
- Volume: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
- Month: November
- Year: 2019
- Address: Hong Kong, China
- Venues: EMNLP | IJCNLP
- SIG: SIGDAT
- Publisher: Association for Computational Linguistics
- Pages: 1388–1394
- URL: https://aclanthology.org/D19-1143
- DOI: 10.18653/v1/D19-1143
- Cite (ACL): Chunting Zhou, Xuezhe Ma, Junjie Hu, and Graham Neubig. 2019. Handling Syntactic Divergence in Low-resource Machine Translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1388–1394, Hong Kong, China. Association for Computational Linguistics.
- Cite (Informal): Handling Syntactic Divergence in Low-resource Machine Translation (Zhou et al., EMNLP-IJCNLP 2019)
- PDF: https://preview.aclanthology.org/ingestion-script-update/D19-1143.pdf
- Code: violet-zct/pytorch-reorder-nmt
- Data: ASPEC