Elena Murgolo


2022

A Quality Estimation and Quality Evaluation Tool for the Translation Industry
Elena Murgolo | Javad Pourmostafa Roshan Sharami | Dimitar Shterionov
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation

With the increase in machine translation (MT) quality in recent years, it has become common practice to integrate MT into the workflow of language service providers (LSPs) and other actors in the translation industry. Because MT has a direct impact on the translation workflow, it is important not only to use high-quality MT systems, but also to understand the quality dimension so that the humans involved in the workflow can make informed decisions. The evaluation and monitoring of MT output quality has become an essential aspect of language technology management in LSPs’ workflows. First, a general practice is to carry out human tests to evaluate MT output quality before deployment. Second, a quality estimate of the translated text, i.e. after deployment, can inform post-editors or even represent post-editing effort. In the former case, based on the quality assessment of a candidate engine, an informed decision can be made as to whether the engine should be deployed for production. In the latter, a quality estimate of the translation output can guide the human post-editor or even provide rough approximations of the post-editing effort. The quality of an MT engine can be assessed at the document or the sentence level. No tool yet exists that jointly provides all these functionalities. The overall objective of the project presented in this paper is to develop an MT quality assessment (MTQA) tool that simplifies the quality assessment of MT engines, combining quality evaluation and quality estimation at document and sentence level.

2021


Lab vs. Production: Two Approaches to Productivity Evaluation for MTPE for LSP
Elena Murgolo
Proceedings of Machine Translation Summit XVIII: Users and Providers Track

In this paper we propose both kinds of tests as viable post-editing productivity evaluation solutions, as both deliver a clear overview of the difference in speed between human translation (HT) and post-editing (PE) for the translators involved. The decision on whether to use the first approach or the second can be based on a number of factors, such as the availability of actual orders in the domain and language combination to be tested, time, and the availability of post-editors in that domain and language combination. The aim of this paper is to show that both methodologies can be useful in different settings for a preliminary evaluation of the possible productivity gain with MTPE.

Proceedings of the Translation and Interpreting Technology Online Conference
Ruslan Mitkov | Vilelmini Sosoni | Julie Christine Giguère | Elena Murgolo | Elizabeth Deysel
Proceedings of the Translation and Interpreting Technology Online Conference

2019

MTPE in Patents: A Successful Business Story
Valeria Premoli | Elena Murgolo | Diego Cresceri
Proceedings of Machine Translation Summit XVII: Translator, Project and User Tracks