Self-Regulated Interactive Sequence-to-Sequence Learning

Julia Kreutzer, Stefan Riezler


Abstract
Not all types of supervision signals are created equal: Different types of feedback have different costs and effects on learning. We show how self-regulation strategies that decide when to ask for which kind of feedback from a teacher (or from oneself) can be cast as a learning-to-learn problem leading to improved cost-aware sequence-to-sequence learning. In experiments on interactive neural machine translation, we find that the self-regulator discovers an 𝜖-greedy strategy for the optimal cost-quality trade-off by mixing different feedback types including corrections, error markups, and self-supervision. Furthermore, we demonstrate its robustness under domain shift and identify it as a promising alternative to active learning.
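The self-regulation idea from the abstract can be illustrated with a minimal ε-greedy sketch: a regulator that, per source sentence, picks which feedback type to request and updates a running estimate of cost-adjusted learning gain. The feedback types, cost values, and reward definition below are illustrative assumptions for exposition only, not the paper's actual implementation (see the released joeynmt code for that).

    # Hypothetical sketch of an epsilon-greedy feedback regulator.
    # Feedback types, costs, and the reward model are assumptions.
    import random

    FEEDBACK_TYPES = ["self-supervision", "error-markup", "full-correction"]
    COSTS = {"self-supervision": 0.0, "error-markup": 1.0, "full-correction": 3.0}

    class EpsilonGreedyRegulator:
        """Chooses which feedback to request for each input,
        trading off estimated learning gain against annotation cost."""

        def __init__(self, epsilon=0.1):
            self.epsilon = epsilon
            # Running estimate of cost-adjusted gain per feedback type.
            self.value = {f: 0.0 for f in FEEDBACK_TYPES}
            self.counts = {f: 0 for f in FEEDBACK_TYPES}

        def choose(self):
            # Explore with probability epsilon, otherwise exploit.
            if random.random() < self.epsilon:
                return random.choice(FEEDBACK_TYPES)
            return max(FEEDBACK_TYPES, key=self.value.get)

        def update(self, feedback_type, quality_gain):
            # Reward = improvement of the seq2seq model minus feedback cost.
            reward = quality_gain - COSTS[feedback_type]
            self.counts[feedback_type] += 1
            n = self.counts[feedback_type]
            # Incremental mean update of the value estimate.
            self.value[feedback_type] += (reward - self.value[feedback_type]) / n

In use, one would call choose() before querying the teacher and update() after measuring the resulting model improvement, so that cheaper feedback is preferred whenever it yields comparable gains.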
Anthology ID:
P19-1029
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
303–315
URL:
https://aclanthology.org/P19-1029
DOI:
10.18653/v1/P19-1029
Cite (ACL):
Julia Kreutzer and Stefan Riezler. 2019. Self-Regulated Interactive Sequence-to-Sequence Learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 303–315, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Self-Regulated Interactive Sequence-to-Sequence Learning (Kreutzer & Riezler, ACL 2019)
PDF:
https://preview.aclanthology.org/nschneid-patch-1/P19-1029.pdf
Video:
https://preview.aclanthology.org/nschneid-patch-1/P19-1029.mp4
Code:
joeynmt/joeynmt (+ additional community code)