Learning from Chunk-based Feedback in Neural Machine Translation

Pavel Petrushkov, Shahram Khadivi, Evgeny Matusov


Abstract
We empirically investigate learning from partial feedback in neural machine translation (NMT), when partial feedback is collected by asking users to highlight a correct chunk of a translation. We propose a simple and effective way of utilizing such feedback in NMT training. We demonstrate how the common machine translation problem of domain mismatch between training and deployment can be reduced solely based on chunk-level user feedback. We conduct a series of simulation experiments to test the effectiveness of the proposed method. Our results show that chunk-level feedback outperforms sentence-based feedback by up to 2.61% BLEU absolute.
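The abstract only sketches how chunk-level feedback enters training, so the snippet below is a minimal, illustrative sketch of one way such feedback could be folded into an NMT objective: it assumes the user's highlighted chunk is encoded as a per-token weight matrix that scales the token-level cross-entropy. The function name, tensor shapes, and the binary weighting scheme are assumptions made for this example, not the exact formulation from the paper.

```python
# Minimal sketch: chunk-weighted token-level loss for NMT training.
# Assumption: user feedback arrives as a per-token weight matrix
# (1.0 for tokens inside a highlighted chunk, e.g. 0.0 elsewhere).
import torch
import torch.nn.functional as F

def chunk_feedback_loss(logits, targets, chunk_weights, pad_id=0):
    """
    logits:        (batch, tgt_len, vocab)  decoder output scores
    targets:       (batch, tgt_len)         token ids of the shown translation
    chunk_weights: (batch, tgt_len)         feedback weight per target token
    """
    # Per-token negative log-likelihood, kept unreduced so it can be reweighted.
    nll = F.cross_entropy(
        logits.transpose(1, 2),   # (batch, vocab, tgt_len) layout for cross_entropy
        targets,
        ignore_index=pad_id,
        reduction="none",
    )                             # -> (batch, tgt_len)

    # Scale each token's loss by its chunk-level feedback weight, so the
    # highlighted tokens drive the parameter update.
    mask = (targets != pad_id).float()
    weighted = nll * chunk_weights * mask
    return weighted.sum() / mask.sum().clamp(min=1.0)
```

With all weights set to one this reduces to standard cross-entropy training, and sentence-level feedback corresponds to the special case of a single scalar weight repeated over the whole sentence, which is what the chunk-level variant is compared against in the abstract.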
Anthology ID: P18-2052
Volume: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month: July
Year: 2018
Address: Melbourne, Australia
Editors: Iryna Gurevych, Yusuke Miyao
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 326–331
URL: https://aclanthology.org/P18-2052
DOI: 10.18653/v1/P18-2052
Cite (ACL): Pavel Petrushkov, Shahram Khadivi, and Evgeny Matusov. 2018. Learning from Chunk-based Feedback in Neural Machine Translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 326–331, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal): Learning from Chunk-based Feedback in Neural Machine Translation (Petrushkov et al., ACL 2018)
PDF: https://preview.aclanthology.org/nschneid-patch-3/P18-2052.pdf