Using Variable Decoding Weight for Language Model in Statistical Machine Translation

Behrang Mohit, Rebecca Hwa, Alon Lavie


Abstract
This paper investigates varying the decoder weight of the language model (LM) when translating different parts of a sentence. We determine the conditions under which the LM weight should be adapted. We find that a better translation can be achieved by varying the LM weight when decoding the most problematic spot in a sentence, which we refer to as a difficult segment. Two adaptation strategies are proposed and compared through experiments. We find that adapting a different LM weight for every difficult segment yields the largest improvement in translation quality.
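The idea of a variable LM weight can be illustrated with a log-linear decoding score in which the LM feature's weight is overridden for spans flagged as difficult. The following is a minimal sketch, not the authors' implementation; the feature names (`tm`, `lm`, `wp`), their values, and the `lm_weight_override` parameter are all hypothetical.

```python
def score_hypothesis(features, weights, lm_weight_override=None):
    """Standard log-linear model score: sum of weight * feature value.

    If lm_weight_override is given, it replaces the default LM weight,
    e.g. when the span being decoded is a difficult segment.
    """
    total = 0.0
    for name, value in features.items():
        w = weights[name]
        if name == "lm" and lm_weight_override is not None:
            w = lm_weight_override
        total += w * value
    return total

# Hypothetical feature values (log-probabilities / counts) for one hypothesis:
# translation model, language model, word penalty.
features = {"tm": -4.2, "lm": -6.1, "wp": -3.0}
weights = {"tm": 1.0, "lm": 0.5, "wp": -0.3}

base = score_hypothesis(features, weights)
# Inside a difficult segment, substitute a segment-specific LM weight.
adapted = score_hypothesis(features, weights, lm_weight_override=0.8)
```

Per the paper's second strategy, each difficult segment would carry its own override value rather than a single global alternative weight.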
Anthology ID:
2010.amta-papers.17
Volume:
Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Research Papers
Month:
October 31-November 4
Year:
2010
Address:
Denver, Colorado, USA
Venue:
AMTA
Publisher:
Association for Machine Translation in the Americas
URL:
https://aclanthology.org/2010.amta-papers.17
Cite (ACL):
Behrang Mohit, Rebecca Hwa, and Alon Lavie. 2010. Using Variable Decoding Weight for Language Model in Statistical Machine Translation. In Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Research Papers, Denver, Colorado, USA. Association for Machine Translation in the Americas.
Cite (Informal):
Using Variable Decoding Weight for Language Model in Statistical Machine Translation (Mohit et al., AMTA 2010)
PDF:
https://preview.aclanthology.org/emnlp22-frontmatter/2010.amta-papers.17.pdf