Abstract
The goal of counterfactual learning for statistical machine translation (SMT) is to optimize a target SMT system from logged data that consist of user feedback to translations that were predicted by another, historic SMT system. A challenge arises from the fact that risk-averse commercial SMT systems deterministically log the most probable translation. The lack of sufficient exploration of the SMT output space seemingly contradicts the theoretical requirements for counterfactual learning. We show that counterfactual learning from deterministic bandit logs is nevertheless possible by smoothing out deterministic components in learning. This can be achieved by additive and multiplicative control variates that avoid degenerate behavior in empirical risk minimization. Our simulation experiments show improvements of up to 2 BLEU points by counterfactual learning from deterministic bandit feedback.
- Anthology ID:
- D17-1272
- Volume:
- Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
- Month:
- September
- Year:
- 2017
- Address:
- Copenhagen, Denmark
- Editors:
- Martha Palmer, Rebecca Hwa, Sebastian Riedel
- Venue:
- EMNLP
- SIG:
- SIGDAT
- Publisher:
- Association for Computational Linguistics
- Note:
- Pages:
- 2566–2576
- Language:
- URL:
- https://aclanthology.org/D17-1272
- DOI:
- 10.18653/v1/D17-1272
- Cite (ACL):
Carolin Lawrence, Artem Sokolov, and Stefan Riezler. 2017. Counterfactual Learning from Bandit Feedback under Deterministic Logging: A Case Study in Statistical Machine Translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2566–2576, Copenhagen, Denmark. Association for Computational Linguistics.
- Cite (Informal):
Counterfactual Learning from Bandit Feedback under Deterministic Logging: A Case Study in Statistical Machine Translation (Lawrence et al., EMNLP 2017)
- PDF:
- https://preview.aclanthology.org/naacl24-info/D17-1272.pdf
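The multiplicative control variate mentioned in the abstract corresponds to self-normalizing the importance-weighted risk estimate. A minimal sketch of this idea on synthetic data follows; the data shapes, reward values, and variable names are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

# Hypothetical logged data: for each source sentence, a deterministic
# logger output its single most probable translation (propensity 1)
# and recorded a scalar user reward, e.g. a per-sentence quality score.
rng = np.random.default_rng(0)
n = 1000
rewards = rng.uniform(0.0, 1.0, size=n)    # logged bandit feedback (assumed)
p_log = np.full(n, 1.0)                    # deterministic logging: propensity = 1
p_target = rng.uniform(0.1, 1.0, size=n)   # target model's prob. of each logged output

w = p_target / p_log  # importance weights

# Plain importance-weighted estimate of expected reward. Under deterministic
# logging this objective degenerates: it is maximized simply by inflating the
# target probability of every logged output, regardless of reward.
r_iw = np.mean(w * rewards)

# Self-normalized estimate: dividing by the total weight acts as a
# multiplicative control variate, so uniformly inflating probabilities
# no longer changes the estimate.
r_snips = np.sum(w * rewards) / np.sum(w)
```

Note that `r_snips` is a convex combination of the logged rewards, so it always stays within their range, which is what removes the degenerate inflation incentive.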