MT Human Evaluation – Insights & Approaches

Paula Manzur


Abstract
This session is designed to help companies and people in the business of translation evaluate MT output, and to show how human translator feedback can be refined to make the process more objective and accurate. You will hear recommendations, insights, and takeaways on how to improve the procedure for human evaluation. Once this is achieved, we can assess whether the results of a human evaluation study cohere with machine metric scores, and consider what the future of translators looks like: the final “human touch” and automated MT review.
Anthology ID:
2021.mtsummit-up.12
Volume:
Proceedings of Machine Translation Summit XVIII: Users and Providers Track
Month:
August
Year:
2021
Address:
Virtual
Editors:
Janice Campbell, Ben Huyck, Stephen Larocca, Jay Marciano, Konstantin Savenkov, Alex Yanishevsky
Venue:
MTSummit
Publisher:
Association for Machine Translation in the Americas
Pages:
149–165
URL:
https://aclanthology.org/2021.mtsummit-up.12
Cite (ACL):
Paula Manzur. 2021. MT Human Evaluation – Insights & Approaches. In Proceedings of Machine Translation Summit XVIII: Users and Providers Track, pages 149–165, Virtual. Association for Machine Translation in the Americas.
Cite (Informal):
MT Human Evaluation – Insights & Approaches (Manzur, MTSummit 2021)
Presentation:
2021.mtsummit-up.12.Presentation.pdf