BLEU, METEOR, BERTScore: Evaluation of Metrics Performance in Assessing Critical Translation Errors in Sentiment-Oriented Text

Hadeel Saadany, Constantin Orasan


Abstract
Social media companies as well as censorship authorities make extensive use of artificial intelligence (AI) tools to monitor postings of hate speech, celebrations of violence, or profanity. Since AI software requires massive volumes of data for training, automatic translation of online content is usually implemented to compensate for the scarcity of text in some languages. However, machine translation (MT) mistakes are a regular occurrence when translating sentiment-oriented user-generated content (UGC), especially when a low-resource language is involved. In such scenarios, the adequacy of the whole process relies on the assumption that the translation can be evaluated correctly. In this paper, we assess the ability of automatic quality metrics to detect critical machine translation errors which can cause serious misunderstanding of the affective message. We compare the performance of three canonical metrics on translations that are meaningless and on translations that are meaningful but contain a critical error which distorts the overall sentiment of the source text. We demonstrate the need to fine-tune automatic metrics to make them more robust in detecting sentiment-critical errors.
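
As a brief illustration of the kind of comparison described in the abstract (not taken from the paper itself), the sketch below scores a hypothetical translation that drops a negation, a sentiment-critical error, against its reference with the three metrics named in the title. The example sentences, the package choices (sacrebleu, nltk, bert-score), and the score handling are all assumptions for illustration only.

    # Minimal sketch (not from the paper): scoring a hypothesis that drops a
    # negation -- a sentiment-critical error -- against its reference with
    # BLEU, METEOR and BERTScore.
    # Assumes: pip install sacrebleu nltk bert-score, plus the NLTK 'wordnet'
    # corpus for METEOR.

    import sacrebleu
    from nltk.translate.meteor_score import meteor_score
    from bert_score import score as bert_score

    reference = "I do not recommend this product at all."  # illustrative only
    hypothesis = "I recommend this product at all."         # negation dropped

    # Sentence-level BLEU via sacrebleu (score is on a 0-100 scale)
    bleu = sacrebleu.sentence_bleu(hypothesis, [reference]).score

    # METEOR (NLTK expects pre-tokenised input)
    meteor = meteor_score([reference.split()], hypothesis.split())

    # BERTScore F1 (downloads a pretrained model on first use)
    _, _, f1 = bert_score([hypothesis], [reference], lang="en")

    print(f"BLEU:      {bleu:.1f}")
    print(f"METEOR:    {meteor:.3f}")
    print(f"BERTScore: {f1.item():.3f}")

Because the hypothesis overlaps heavily with the reference, all three metrics may still assign it a fairly high score despite the reversed sentiment, which is the kind of blind spot the paper examines.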
Anthology ID: 2021.triton-1.6
Volume: Proceedings of the Translation and Interpreting Technology Online Conference
Month: July
Year: 2021
Address: Held Online
Editors: Ruslan Mitkov, Vilelmini Sosoni, Julie Christine Giguère, Elena Murgolo, Elizabeth Deysel
Venue: TRITON
Publisher: INCOMA Ltd.
Pages: 48–56
URL: https://aclanthology.org/2021.triton-1.6
Cite (ACL): Hadeel Saadany and Constantin Orasan. 2021. BLEU, METEOR, BERTScore: Evaluation of Metrics Performance in Assessing Critical Translation Errors in Sentiment-Oriented Text. In Proceedings of the Translation and Interpreting Technology Online Conference, pages 48–56, Held Online. INCOMA Ltd.
Cite (Informal): BLEU, METEOR, BERTScore: Evaluation of Metrics Performance in Assessing Critical Translation Errors in Sentiment-Oriented Text (Saadany & Orasan, TRITON 2021)
PDF: https://preview.aclanthology.org/nschneid-patch-4/2021.triton-1.6.pdf