Exploring Supervised and Unsupervised Rewards in Machine Translation

Julia Ive, Zixu Wang, Marina Fomicheva, Lucia Specia


Abstract
Reinforcement Learning (RL) is a powerful framework to address the discrepancy between the loss functions used during training and the final evaluation metrics used at test time. When applied to neural Machine Translation (MT), it minimises the mismatch between the cross-entropy loss and non-differentiable evaluation metrics like BLEU. However, the suitability of these metrics as reward functions at training time is questionable: they tend to be sparse and biased towards the specific words used in the reference texts. We propose to address this problem by making models less reliant on such metrics in two ways: (a) with an entropy-regularised RL method that not only maximises a reward function but also explores the action space to avoid peaky distributions; (b) with a novel RL method that uses a dynamic unsupervised reward function to balance exploration and exploitation. We base our proposals on the Soft Actor-Critic (SAC) framework, adapting its off-policy maximum-entropy model for language generation applications such as MT. We demonstrate that SAC with a BLEU reward tends to overfit less to the training data and performs better on out-of-domain data. We also show that our dynamic unsupervised reward can lead to better translation of ambiguous words.
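For readers unfamiliar with the framework: the maximum-entropy objective underlying SAC augments the expected reward with a policy-entropy bonus. The formulation below follows the standard SAC objective of Haarnoja et al. (2018), not necessarily this paper's exact notation:

J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}\!\left[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \right]

The temperature \alpha weights the entropy term \mathcal{H}, trading exploitation of the reward r against exploration of the action space. In the MT adaptation sketched in the abstract, a state s_t corresponds to the source sentence together with the partial translation generated so far, an action a_t is the choice of the next target-vocabulary token, and r is either a supervised BLEU-based reward or the proposed dynamic unsupervised reward.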
Anthology ID:
2021.eacl-main.164
Volume:
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Month:
April
Year:
2021
Address:
Online
Editors:
Paola Merlo, Jörg Tiedemann, Reut Tsarfaty
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
1908–1920
URL:
https://aclanthology.org/2021.eacl-main.164
DOI:
10.18653/v1/2021.eacl-main.164
Cite (ACL):
Julia Ive, Zixu Wang, Marina Fomicheva, and Lucia Specia. 2021. Exploring Supervised and Unsupervised Rewards in Machine Translation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1908–1920, Online. Association for Computational Linguistics.
Cite (Informal):
Exploring Supervised and Unsupervised Rewards in Machine Translation (Ive et al., EACL 2021)
PDF:
https://preview.aclanthology.org/dois-2013-emnlp/2021.eacl-main.164.pdf
Code:
ImperialNLP/pysimt