2025
Context-Aware Monolingual Evaluation of Machine Translation
Silvio Picinini | Sheila Castilho
Proceedings of Machine Translation Summit XX: Volume 1
This paper explores the potential of context-aware monolingual evaluation for assessing machine translation (MT) when no source text is given for reference. To this end, we compare monolingual with bilingual evaluations (with source text) under two scenarios: the evaluation of a single MT system, and the pairwise comparison of two MT systems. Four professional translators performed both monolingual and bilingual evaluations by assigning ratings, annotating errors, and providing feedback on their experience. Our findings suggest that context-aware monolingual evaluation achieves outcomes comparable to bilingual evaluation, and they highlight the feasibility and potential of monolingual evaluation as an efficient approach to assessing MT.
2022
Improving Consistency of Human and Machine Translations
Silvio Picinini
Proceedings of the 15th Biennial Conference of the Association for Machine Translation in the Americas (Volume 2: Users and Providers Track and Government Track)
Consistency is one of the desired quality features in final translations. For human-only translations (without MT), we rely on the translator's ability to achieve consistency. For MT, consistency is neither guaranteed nor expected. MT may actually generate inconsistencies, and it is left to the post-editor to introduce consistency manually. This work presents a method that facilitates the improvement of consistency without the need for a glossary. It detects inconsistencies in the post-edited work and gives the post-editor the opportunity to fix the translation towards consistency. We describe the method, which is simple and involves only a short Python script, and also provide numbers that show its positive impact. This method is a contribution to a broader set of quality checks that can improve the language quality of both human and machine translations.
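The abstract describes the script only at a high level. As a rough illustration of how such a glossary-free check could work, here is a minimal Python sketch that flags any source segment appearing more than once with differing translations, so the post-editor can review those segments; the file name postedited_pairs.tsv, the tab-separated source/target layout, and the flagging rule are assumptions for illustration, not the author's actual script.

# Minimal sketch (not the paper's actual script): flag repeated source
# segments whose post-edited translations differ, so the post-editor can
# review them for consistency. Assumes a tab-separated file with one
# "source<TAB>target" pair per line; name and layout are assumptions.
import csv
from collections import defaultdict

def find_inconsistencies(path):
    translations = defaultdict(set)  # source segment -> distinct targets
    with open(path, encoding="utf-8", newline="") as f:
        for row in csv.reader(f, delimiter="\t"):
            if len(row) != 2:
                continue  # skip malformed lines
            source, target = (s.strip() for s in row)
            translations[source].add(target)
    # A source with more than one distinct target is a candidate inconsistency.
    return {src: tgts for src, tgts in translations.items() if len(tgts) > 1}

if __name__ == "__main__":
    for src, tgts in find_inconsistencies("postedited_pairs.tsv").items():
        print(f"{src!r} was translated {len(tgts)} different ways:")
        for tgt in sorted(tgts):
            print(f"  - {tgt!r}")

Note that a sketch like this only surfaces candidates rather than auto-correcting them, which matches the abstract's emphasis on giving the post-editor the opportunity to fix the translation; some flagged pairs may be legitimate variation in context.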
2021
A Review for Large Volumes of Post-edited Data
Silvio Picinini
Proceedings of Machine Translation Summit XVIII: Users and Providers Track
Interested in being more confident about the quality of your post-edited data? This session teaches how to create a Longitudinal Review that looks at specific aspects of quality in a systematic way, across the entire content rather than just a sample. Are you a project manager for a multilingual project? The Longitudinal Review can give insights that help with project management, even if you do not speak the target language. It can also help you detect issues that a Sample Review may miss. Please come and learn more about this new way of looking at review.
2020
A language comparison of Human Evaluation and Quality Estimation
Silvio Picinini | Adam Bittlingmayer
Proceedings of the 14th Conference of the Association for Machine Translation in the Americas (Volume 2: User Track)
2018
Tutorial: Corpora Quality Management for MT - Practices and Roles
Silvio Picinini | Pete Smith | Nicola Ueffing
Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 2: User Track)
2017
A detailed investigation of Bias Errors in Post-editing of MT output
Silvio Picinini | Nicola Ueffing
Proceedings of Machine Translation Summit XVI: Commercial MT Users and Translators Track
Harvesting Polysemous Terms from e-commerce Data to Enhance QA
Silvio Picinini
Proceedings of Machine Translation Summit XVI: Commercial MT Users and Translators Track
2014
Challenges of machine translation for user generated content: queries from Brazilian users
Silvio Picinini
Proceedings of the 11th Conference of the Association for Machine Translation in the Americas: MT Users Track