RTM Stacking Results for Machine Translation Performance Prediction

Ergun Biçici


Abstract
We obtain new results using referential translation machines (RTMs) with an increased number of learning models whose predictions are stacked to obtain a better mixture-of-experts prediction. We combine features extracted from the word-level predictions with the sentence- or document-level features, which significantly improves the results on the training sets but decreases the results on the test sets.
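The stacking idea described in the abstract, where predictions from several base learners become input features for a combining meta-learner, can be sketched generically as follows. This is an illustrative toy example with synthetic data, not the paper's actual RTM pipeline; the feature matrix, quality scores, and ridge-style base learners are stand-ins.

```python
# Minimal illustrative sketch of stacked regression: base model
# predictions are combined by a meta-learner (here ordinary least
# squares). Generic toy example, NOT the paper's RTM system.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                  # stand-in sentence features
y = X @ np.array([0.5, -0.2, 0.1, 0.0, 0.3])   # stand-in quality scores

def ridge_fit_predict(X, y, lam):
    """A simple ridge-regression base learner: fit, then predict."""
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return X @ w

# Two base learners that differ only in regularization strength.
p1 = ridge_fit_predict(X, y, 1.0)
p2 = ridge_fit_predict(X, y, 10.0)

# Stacking: the base predictions become the meta-learner's features.
Z = np.column_stack([p1, p2])
w_meta, *_ = np.linalg.lstsq(Z, y, rcond=None)
y_stacked = Z @ w_meta
print(y_stacked.shape)  # (200,)
```

Adding more base models enlarges the stacked feature set `Z`, which is the "increased number of learning models" direction the abstract reports on.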
Anthology ID:
W19-5405
Volume:
Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)
Month:
August
Year:
2019
Address:
Florence, Italy
Editors:
Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, André Martins, Christof Monz, Matteo Negri, Aurélie Névéol, Mariana Neves, Matt Post, Marco Turchi, Karin Verspoor
Venue:
WMT
SIG:
SIGMT
Publisher:
Association for Computational Linguistics
Pages:
73–77
URL:
https://aclanthology.org/W19-5405
DOI:
10.18653/v1/W19-5405
Cite (ACL):
Ergun Biçici. 2019. RTM Stacking Results for Machine Translation Performance Prediction. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 73–77, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
RTM Stacking Results for Machine Translation Performance Prediction (Biçici, WMT 2019)
PDF:
https://preview.aclanthology.org/nschneid-patch-2/W19-5405.pdf