Quantitative evaluation of machine translation systems: sentence level

Palmira Marrafa, António Ribeiro


Abstract
This paper reports the first results of ongoing research on the evaluation of Machine Translation quality. The starting point for this work was the framework of ISLE (the International Standards for Language Engineering), which provides a classification for the evaluation of Machine Translation. In order to evaluate translation quality quantitatively, we pursue a more consistent, fine-grained and comprehensive classification of possible translation errors, and we propose metrics for sentence-level errors, specifically lexical and syntactic errors.
Anthology ID:
2001.mtsummit-eval.2
Volume:
Workshop on MT Evaluation
Month:
September 18-22
Year:
2001
Address:
Santiago de Compostela, Spain
Editors:
Eduard Hovy, Margaret King, Sandra Manzi, Florence Reeder
Venue:
MTSummit
URL:
https://aclanthology.org/2001.mtsummit-eval.2
Cite (ACL):
Palmira Marrafa and António Ribeiro. 2001. Quantitative evaluation of machine translation systems: sentence level. In Workshop on MT Evaluation, Santiago de Compostela, Spain.
Cite (Informal):
Quantitative evaluation of machine translation systems: sentence level (Marrafa & Ribeiro, MTSummit 2001)
PDF:
https://preview.aclanthology.org/nschneid-patch-4/2001.mtsummit-eval.2.pdf