Unsupervised Neural Machine Translation with Future Rewarding

Xiangpeng Wei, Yue Hu, Luxi Xing, Li Gao
Abstract
In this paper, we alleviate the local optimality of back-translation by learning a policy (which takes the form of an encoder-decoder and is defined by its parameters) with future rewarding under the reinforcement learning framework, aiming to optimize global word predictions for unsupervised neural machine translation. To this end, we design a novel reward function that characterizes high-quality translations from two aspects: n-gram matching and semantic adequacy. The n-gram matching is defined as an alternative to the discrete BLEU metric, and the semantic adequacy measures how adequately the meaning of the source sentence is conveyed to the target. During training, our model strives to earn higher rewards by learning to produce grammatically more accurate and semantically more adequate translations. In addition, a variational inference network (VIN) is proposed to constrain corresponding sentences in the two languages to have the same or similar latent semantic code. On the widely used WMT'14 English-French, WMT'16 English-German and NIST Chinese-to-English benchmarks, our models obtain 27.59/27.15, 19.65/23.42 and 22.40 BLEU points, respectively, without using any labeled data, demonstrating consistent improvements over previous unsupervised NMT models.
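As a rough illustration of the training signal the abstract describes, the sketch below computes a BLEU-like n-gram matching reward, a cosine-based semantic adequacy score, and a REINFORCE-style loss that scales the sampled translation's log-likelihood by the combined reward. This is our reading, not the authors' released code: the helper names, the mean-pooled cosine adequacy term, the equal mixing weights, and the baseline value are all illustrative assumptions.

```python
# Illustrative sketch only: helper names, the mean-pooled cosine adequacy
# term, and the 0.5/0.5 mixing weights are assumptions, not the paper's code.
import math
from collections import Counter

import torch


def ngram_matching_reward(hyp, ref, max_n=4):
    """Smoothed n-gram precision of hyp against ref: a BLEU-like scalar.
    REINFORCE only needs a scalar score for the sampled sequence, so the
    reward itself need not be differentiable."""
    if not hyp:
        return 0.0
    log_prec = 0.0
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped matches
        total = max(sum(hyp_ngrams.values()), 1)
        # Add-one smoothing keeps the log finite when an order has no match.
        log_prec += math.log((overlap + 1.0) / (total + 1.0))
    brevity = min(1.0, math.exp(1.0 - len(ref) / len(hyp)))  # brevity penalty
    return brevity * math.exp(log_prec / max_n)


def semantic_adequacy(src_states, hyp_states):
    """Cosine similarity of mean-pooled sentence representations; meaningful
    only if both languages share a latent space, as the VIN constraint
    encourages."""
    return torch.cosine_similarity(
        src_states.mean(dim=0), hyp_states.mean(dim=0), dim=0
    ).item()


def reinforce_loss(log_probs, reward, baseline=0.0):
    """Policy-gradient loss: -(reward - baseline) * sum_t log p(y_t | y_<t, x)."""
    return -(reward - baseline) * log_probs.sum()


if __name__ == "__main__":
    hyp = "the cat sat on the mat".split()
    ref = "the cat is on the mat".split()   # e.g. a back-translated pseudo-reference
    src_states = torch.randn(7, 512)        # stand-in encoder states
    hyp_states = torch.randn(6, 512)
    log_probs = torch.rand(6, requires_grad=True).log()  # stand-in token log-probs
    reward = (0.5 * ngram_matching_reward(hyp, ref)
              + 0.5 * semantic_adequacy(src_states, hyp_states))
    loss = reinforce_loss(log_probs, reward, baseline=0.2)
    loss.backward()                         # gradients flow into the sampled log-probs
    print(f"reward={reward:.3f} loss={loss.item():.3f}")
```

Because the reward enters only as a scalar multiplier on the log-likelihood, the n-gram term can remain non-differentiable; a baseline (here a constant, by assumption) is the usual variance-reduction device for this kind of estimator.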
Anthology ID:
K19-1027
Volume:
Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)
Month:
November
Year:
2019
Address:
Hong Kong, China
Editors:
Mohit Bansal, Aline Villavicencio
Venue:
CoNLL
SIG:
SIGNLL
Publisher:
Association for Computational Linguistics
Pages:
281–290
URL:
https://aclanthology.org/K19-1027
DOI:
10.18653/v1/K19-1027
Cite (ACL):
Xiangpeng Wei, Yue Hu, Luxi Xing, and Li Gao. 2019. Unsupervised Neural Machine Translation with Future Rewarding. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 281–290, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
Unsupervised Neural Machine Translation with Future Rewarding (Wei et al., CoNLL 2019)
PDF:
https://aclanthology.org/K19-1027.pdf