Zeyang Wang
2020
Dynamic Curriculum Learning for Low-Resource Neural Machine Translation
Chen Xu | Bojie Hu | Yufan Jiang | Kai Feng | Zeyang Wang | Shen Huang | Qi Ju | Tong Xiao | Jingbo Zhu
Proceedings of the 28th International Conference on Computational Linguistics
Large amounts of data have made neural machine translation (NMT) a big success in recent years. But training these models on small-scale corpora remains a challenge. In this case, how the training data is used becomes more important. Here, we investigate the effective use of training data for low-resource NMT. In particular, we propose a dynamic curriculum learning (DCL) method to reorder training samples during training. Unlike previous work, we do not use a static scoring function for reordering. Instead, the order of training samples is dynamically determined in two ways: loss decline and model competence. This eases training by highlighting easy samples that the current model has enough competence to learn. We test our DCL method in a Transformer-based system. Experimental results show that DCL outperforms several strong baselines on three low-resource machine translation benchmarks and on differently sized subsets of the WMT’16 En-De data.
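As a minimal sketch of the idea described in the abstract, the snippet below orders samples by a dynamic difficulty signal (loss decline between checkpoints) and limits sampling to the portion of the data the model is currently competent to learn. The square-root competence schedule and the names `competence` and `select_batch` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def competence(step: int, total_steps: int, c0: float = 0.01) -> float:
    # Assumed square-root competence schedule: the fraction of the
    # difficulty-ranked data the model may sample from grows from
    # roughly c0 to 1.0 over the course of training.
    return min(1.0, np.sqrt(step / total_steps * (1 - c0 ** 2) + c0 ** 2))

def select_batch(prev_loss: np.ndarray, curr_loss: np.ndarray,
                 step: int, total_steps: int, batch_size: int) -> np.ndarray:
    # Dynamic difficulty proxy: samples whose loss declines fastest are
    # treated as "easy" for the current model and ranked first.
    decline = prev_loss - curr_loss
    ranked = np.argsort(-decline)  # steepest loss decline first
    # Model competence restricts sampling to the easiest fraction of data.
    pool_size = max(batch_size, int(competence(step, total_steps) * len(ranked)))
    return np.random.choice(ranked[:pool_size], size=batch_size, replace=False)
```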
2019
The NiuTrans Machine Translation Systems for WMT19
Bei Li | Yinqiao Li | Chen Xu | Ye Lin | Jiqiang Liu | Hui Liu | Ziyang Wang | Yuhao Zhang | Nuo Xu | Zeyang Wang | Kai Feng | Hexuan Chen | Tengbo Liu | Yanyang Li | Qiang Wang | Tong Xiao | Jingbo Zhu
Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
This paper describes the NiuTrans neural machine translation systems for the WMT 2019 news translation tasks. We participated in 13 translation directions, including 11 supervised tasks, namely EN↔{ZH, DE, RU, KK, LT} and GU→EN, as well as the unsupervised DE↔CS sub-track. Our systems were built on Deep Transformer and several back-translation methods. Iterative knowledge distillation and ensemble+reranking were also employed to obtain stronger models. Our unsupervised submissions were based on NMT enhanced by SMT. As a result, we achieved the highest BLEU scores in the {KK↔EN, GU→EN} directions, ranking 2nd in {RU→EN, DE↔CS} and 3rd in {ZH→EN, LT→EN, EN→RU, EN↔DE} among all constrained submissions.
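For context on one component mentioned in the abstract, here is a minimal back-translation sketch: a reverse (target-to-source) model turns monolingual target-side text into synthetic parallel data. The function name and the `reverse_translate` callable are hypothetical stand-ins; the actual systems combine several back-translation variants not shown here.

```python
from typing import Callable, List, Tuple

def back_translate(mono_tgt: List[str],
                   reverse_translate: Callable[[str], str],
                   bitext: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
    # A target->source model generates a synthetic source sentence for
    # each monolingual target sentence; pairing them yields extra
    # (source, target) examples, mixed with the genuine parallel data.
    synthetic = [(reverse_translate(t), t) for t in mono_tgt]
    return bitext + synthetic
```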