Abstract
The standard machine translation evaluation framework measures the single-best output of machine translation systems. There are, however, many situations where n-best lists are needed, yet there is no established way of evaluating them. This paper establishes a framework for n-best evaluation by outlining three questions one could ask when defining a ‘good’ n-best list and proposing an evaluation measure for each. The first and principal contribution is an evaluation measure that characterizes the translation quality of an entire n-best list by asking whether many of the valid translations are placed near the top of the list. The second is a measure that uses gold translations with preference annotations to ask to what degree systems can produce ranked lists in preference order. The third is a measure that rewards partial matches, evaluating the closeness of the many items in an n-best list to a set of many valid references. These three perspectives make clear that having access to many references can be useful when n-best evaluation is the goal.
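The first measure asks whether valid translations appear near the top of an n-best list. As a rough illustration of that idea (not the paper's actual metric), the sketch below scores a list with a DCG-style rank discount against a set of exact-match references; the function name, discount choice, and normalization are assumptions made for illustration only.

```python
import math

# Illustrative sketch only: reward valid translations that sit near the
# top of the n-best list, discounting each match by 1/log2(rank + 1).
def nbest_coverage(nbest: list[str], references: set[str]) -> float:
    gains = [1.0 / math.log2(rank + 1) if hyp in references else 0.0
             for rank, hyp in enumerate(nbest, start=1)]
    # Normalize by the ideal ordering: all matches moved to the top.
    n_matches = sum(1 for hyp in nbest if hyp in references)
    ideal = sum(1.0 / math.log2(rank + 1)
                for rank in range(1, n_matches + 1))
    return sum(gains) / ideal if ideal > 0 else 0.0

# Two of the four hypotheses are valid; the score rises when they rank higher.
refs = {"she reads the book", "she is reading the book"}
nbest = ["she reads the book", "she read book",
         "she is reading the book", "book reads she"]
print(round(nbest_coverage(nbest, refs), 3))  # ≈ 0.92
```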
- Anthology ID: 2020.eval4nlp-1.7
- Volume: Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems
- Month: November
- Year: 2020
- Address: Online
- Venue: Eval4NLP
- Publisher: Association for Computational Linguistics
- Pages: 60–68
- URL: https://aclanthology.org/2020.eval4nlp-1.7
- DOI: 10.18653/v1/2020.eval4nlp-1.7
- Cite (ACL): Jacob Bremerman, Huda Khayrallah, Douglas Oard, and Matt Post. 2020. On the Evaluation of Machine Translation n-best Lists. In Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems, pages 60–68, Online. Association for Computational Linguistics.
- Cite (Informal): On the Evaluation of Machine Translation n-best Lists (Bremerman et al., Eval4NLP 2020)
- PDF: https://preview.aclanthology.org/ingestion-script-update/2020.eval4nlp-1.7.pdf
- Data: Duolingo STAPLE Shared Task