Towards Multilingual Automatic Open-Domain Dialogue Evaluation

John Mendonca, Alon Lavie, Isabel Trancoso
Abstract
The main limiting factor in the development of robust multilingual open-domain dialogue evaluation metrics is the lack of multilingual data and the limited availability of open-source multilingual dialogue systems. In this work, we propose a workaround for this lack of data by leveraging a strong multilingual pretrained encoder-based Language Model and augmenting existing English dialogue data using Machine Translation. We empirically show that the naive approach of finetuning a pretrained multilingual encoder model with translated data is insufficient to outperform the strong baseline of finetuning a multilingual model with only source data. Instead, the best approach is the careful curation of translated data using MT Quality Estimation metrics, excluding low-quality translations that hinder performance.
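The curation step described in the abstract can be sketched as a simple threshold filter over (source, translation) pairs. The snippet below is a minimal illustration, not the authors' implementation: the QE scores are assumed to be precomputed by an external Quality Estimation model (e.g. a COMET-QE-style metric), and the threshold value is hypothetical.

```python
def filter_by_qe(pairs, qe_scores, threshold=0.5):
    """Keep only translated examples whose QE score passes the cut.

    pairs     : list of (source_utterance, translated_utterance) tuples
    qe_scores : list of floats, one precomputed QE score per pair
    threshold : hypothetical cutoff; low-quality translations are dropped
    """
    return [pair for pair, score in zip(pairs, qe_scores) if score >= threshold]


# Toy example with assumed precomputed QE scores.
pairs = [("How are you?", "Como estás?"), ("Good night.", "Bom noites?!")]
scores = [0.91, 0.32]
curated = filter_by_qe(pairs, scores, threshold=0.5)
print(curated)  # only the high-quality translation survives
```

The curated subset would then replace the raw translated data when finetuning the multilingual encoder.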
Anthology ID:
2023.sigdial-1.11
Volume:
Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue
Month:
September
Year:
2023
Address:
Prague, Czechia
Editors:
Svetlana Stoyanchev, Shafiq Joty, David Schlangen, Ondrej Dusek, Casey Kennington, Malihe Alikhani
Venue:
SIGDIAL
SIG:
SIGDIAL
Publisher:
Association for Computational Linguistics
Pages:
130–141
URL:
https://aclanthology.org/2023.sigdial-1.11
DOI:
10.18653/v1/2023.sigdial-1.11
Cite (ACL):
John Mendonca, Alon Lavie, and Isabel Trancoso. 2023. Towards Multilingual Automatic Open-Domain Dialogue Evaluation. In Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 130–141, Prague, Czechia. Association for Computational Linguistics.
Cite (Informal):
Towards Multilingual Automatic Open-Domain Dialogue Evaluation (Mendonca et al., SIGDIAL 2023)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/2023.sigdial-1.11.pdf