Neural Network Language Models for Candidate Scoring in Hybrid Multi-System Machine Translation

Matīss Rikters


Abstract
This paper compares how different neural network based language modeling tools, used for selecting the best candidate fragments, affect the final translation quality in a hybrid multi-system machine translation setup. Experiments were conducted by comparing perplexity and BLEU scores on common test cases using the same training data set. A 12-gram statistical language model was selected as a baseline against which three neural network based models with different characteristics were compared. The models were integrated into a hybrid system that relies on the perplexity score of a sentence fragment to produce the best fitting translations. The results show a correlation between language model perplexity and BLEU scores, as well as overall improvements in BLEU.
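As a rough illustration of the candidate scoring the abstract describes, the sketch below picks the lowest-perplexity fragment among alternative translations. The toy unigram model and the helper names (train_unigram_lm, perplexity, best_candidate) are illustrative assumptions standing in for the 12-gram statistical and neural language models the paper actually compares; this is not the paper's implementation.

```python
# Minimal sketch (assumed, not the paper's code): score candidate translation
# fragments with a language model and keep the one with the lowest perplexity.
import math
from collections import Counter


def train_unigram_lm(corpus_tokens):
    """Toy unigram LM with add-one smoothing; a stand-in for the statistical
    and neural LMs compared in the paper."""
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    vocab = len(counts) + 1

    def logprob(token):
        return math.log((counts.get(token, 0) + 1) / (total + vocab))

    return logprob


def perplexity(logprob, tokens):
    """Perplexity = exp(-1/N * sum_i log P(w_i))."""
    if not tokens:
        return float("inf")
    return math.exp(-sum(logprob(t) for t in tokens) / len(tokens))


def best_candidate(logprob, candidates):
    """Return the candidate fragment the LM scores as most fluent
    (i.e. the one with the lowest perplexity)."""
    return min(candidates, key=lambda c: perplexity(logprob, c.split()))


if __name__ == "__main__":
    lm = train_unigram_lm("the cat sat on the mat the dog sat".split())
    fragments = ["the cat sat", "cat the sat on", "sat mat dog the"]
    print(best_candidate(lm, fragments))
```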
Anthology ID: W16-4502
Volume: Proceedings of the Sixth Workshop on Hybrid Approaches to Translation (HyTra6)
Month: December
Year: 2016
Address: Osaka, Japan
Editors: Patrik Lambert, Bogdan Babych, Kurt Eberle, Rafael E. Banchs, Reinhard Rapp, Marta R. Costa-jussà
Venue: HyTra
Publisher: The COLING 2016 Organizing Committee
Pages: 8–15
URL: https://aclanthology.org/W16-4502
Cite (ACL): Matīss Rikters. 2016. Neural Network Language Models for Candidate Scoring in Hybrid Multi-System Machine Translation. In Proceedings of the Sixth Workshop on Hybrid Approaches to Translation (HyTra6), pages 8–15, Osaka, Japan. The COLING 2016 Organizing Committee.
Cite (Informal): Neural Network Language Models for Candidate Scoring in Hybrid Multi-System Machine Translation (Rikters, HyTra 2016)
PDF: https://preview.aclanthology.org/teach-a-man-to-fish/W16-4502.pdf