Abstract
This paper reports the results of an experiment in machine translation (MT) evaluation, designed to determine whether easily and rapidly collected metrics can predict the human-generated quality parameters of MT output. In this experiment we evaluated a system’s ability to translate named entities and compared this measure with previous evaluation scores of fidelity and intelligibility. A correlation between traditional MT measures and named entity scores would offer two significant benefits: the ability to automate named entity scoring, and thus MT scoring; and insights into the linguistic aspects of task-based uses of MT, as captured in previous studies.
- Anthology ID:
- 2001.mtsummit-eval.8
- Volume:
- Workshop on MT Evaluation
- Month:
- September 18-22
- Year:
- 2001
- Address:
- Santiago de Compostela, Spain
- Editors:
- Eduard Hovy, Margaret King, Sandra Manzi, Florence Reeder
- Venue:
- MTSummit
- URL:
- https://aclanthology.org/2001.mtsummit-eval.8
- Cite (ACL):
- Florence Reeder, Keith Miller, Jennifer Doyon, and John White. 2001. The naming of things and the confusion of tongues: an MT metric. In Workshop on MT Evaluation, Santiago de Compostela, Spain.
- Cite (Informal):
- The naming of things and the confusion of tongues: an MT metric (Reeder et al., MTSummit 2001)
- PDF:
- https://preview.aclanthology.org/landing_page/2001.mtsummit-eval.8.pdf