In one hundred words or less

Florence Reeder


Abstract
This paper reports on research that aims to test the efficacy of applying automated evaluation techniques, originally designed for human second language learners, to machine translation (MT) system evaluation. We believe that such evaluation techniques will provide insight into MT evaluation, MT development, the human translation process and the human language learning process. The experiment described here looks only at the intelligibility of MT output. The evaluation technique is derived from a second language acquisition experiment which showed that assessors can differentiate native from non-native language essays in less than 100 words. Particularly illuminating for our purposes is the set of factors on which the assessors based their decisions. We replicated this experiment, including both human and machine translation outputs in the decision set, to see whether similar criteria could be elicited. The encouraging results of this experiment, along with an analysis of the language factors contributing to the successful outcomes, are presented here.
Anthology ID:
2001.mtsummit-eval.7
Volume:
Workshop on MT Evaluation
Month:
September 18-22
Year:
2001
Address:
Santiago de Compostela, Spain
Venue:
MTSummit
URL:
https://aclanthology.org/2001.mtsummit-eval.7
Cite (ACL):
Florence Reeder. 2001. In one hundred words or less. In Workshop on MT Evaluation, Santiago de Compostela, Spain.
Cite (Informal):
In one hundred words or less (Reeder, MTSummit 2001)
PDF:
https://preview.aclanthology.org/update-css-js/2001.mtsummit-eval.7.pdf