Statistical machine translation without long parallel sentences for training data.

Jin’ichi Murakami, Masato Tokuhisa, Satoru Ikehara


Abstract
In this study, we focused on the reliability of the phrase table. We built phrase tables using Och's method [2], which sometimes generates completely wrong phrase pairs. We found that such faulty phrase tables were caused by long parallel sentences, so we removed these long parallel sentences from the training data. We used standard tools for statistical machine translation, such as "Giza++" [3], "moses" [4], and "training-phrase-model.perl" [5]. Our proposed method obtained BLEU scores of 0.4047 (TEXT) and 0.3553 (1-BEST) on the Challenge-EC task, whereas the standard method obtained 0.3975 (TEXT) and 0.3482 (1-BEST). This means that our proposed method was effective for the Challenge-EC task; however, it was not effective for the BTEC-CE and Challenge-CE tasks. Overall, our system's performance was not strong: for example, it placed 7th among the 8 systems on the Challenge-EC task.
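The preprocessing idea described in the abstract, dropping overly long sentence pairs from the parallel corpus before phrase-table training, could be sketched as follows. This is an illustrative sketch only: the length threshold and the helper function are assumptions, not details taken from the paper.

```python
# Sketch of the paper's preprocessing idea: remove long parallel
# sentences from the training corpus before running Giza++ / Moses
# phrase extraction. MAX_WORDS is an assumed threshold; the paper's
# actual cutoff is not stated in the abstract.

MAX_WORDS = 40  # assumed per-sentence word limit

def filter_long_pairs(src_lines, tgt_lines, max_words=MAX_WORDS):
    """Keep only sentence pairs where both sides are at most
    max_words words long."""
    kept = []
    for src, tgt in zip(src_lines, tgt_lines):
        if len(src.split()) <= max_words and len(tgt.split()) <= max_words:
            kept.append((src, tgt))
    return kept

# Toy usage: the long pair is discarded, the short pair survives.
src = ["a short sentence", "word " * 50]
tgt = ["une phrase courte", "mot " * 50]
filtered = filter_long_pairs(src, tgt)
print(len(filtered))  # 1
```

In practice a Moses pipeline would apply such length filtering to the source and target corpus files before word alignment, so that no long pair ever reaches phrase extraction.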
Anthology ID:
2008.iwslt-evaluation.19
Volume:
Proceedings of the 5th International Workshop on Spoken Language Translation: Evaluation Campaign
Month:
October 20-21
Year:
2008
Address:
Waikiki, Hawaii
Venue:
IWSLT
SIG:
SIGSLT
Pages:
132–137
URL:
https://aclanthology.org/2008.iwslt-evaluation.19
Cite (ACL):
Jin’ichi Murakami, Masato Tokuhisa, and Satoru Ikehara. 2008. Statistical machine translation without long parallel sentences for training data.. In Proceedings of the 5th International Workshop on Spoken Language Translation: Evaluation Campaign, pages 132–137, Waikiki, Hawaii.
Cite (Informal):
Statistical machine translation without long parallel sentences for training data. (Murakami et al., IWSLT 2008)
PDF:
https://preview.aclanthology.org/auto-file-uploads/2008.iwslt-evaluation.19.pdf