Improving Contextual Quality Models for MT Evaluation Based on Evaluators’ Feedback

Paula Estrella, Andrei Popescu-Belis, Maghi King


Abstract
The Framework for the Evaluation of Machine Translation (FEMTI) contains guidelines for building a quality model that is used to evaluate MT systems in relation to the purpose and intended context of use of the systems. Contextual quality models can thus be constructed, but entering into FEMTI the knowledge required for this operation is a complex task. An experiment has been set up in order to transfer knowledge from MT evaluation experts into the FEMTI guidelines, by polling experts about the evaluation methods they would use in a particular context, then inferring from the results generic relations between characteristics of the context of use and quality characteristics. The results of this hands-on exercise, carried out as part of a conference tutorial, have served to refine FEMTI’s “generic contextual quality model” and to obtain feedback on the FEMTI guidelines in general.
Anthology ID:
L08-1401
Volume:
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)
Month:
May
Year:
2008
Address:
Marrakech, Morocco
Venue:
LREC
Publisher:
European Language Resources Association (ELRA)
URL:
http://www.lrec-conf.org/proceedings/lrec2008/pdf/236_paper.pdf
Cite (ACL):
Paula Estrella, Andrei Popescu-Belis, and Maghi King. 2008. Improving Contextual Quality Models for MT Evaluation Based on Evaluators’ Feedback. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08), Marrakech, Morocco. European Language Resources Association (ELRA).
Cite (Informal):
Improving Contextual Quality Models for MT Evaluation Based on Evaluators’ Feedback (Estrella et al., LREC 2008)
PDF:
http://www.lrec-conf.org/proceedings/lrec2008/pdf/236_paper.pdf