FineD-Eval: Fine-grained Automatic Dialogue-Level Evaluation

Chen Zhang, Luis Fernando D’Haro, Qiquan Zhang, Thomas Friedrichs, Haizhou Li


Abstract
Recent model-based, reference-free metrics for open-domain dialogue evaluation exhibit promising correlations with human judgment. However, they either perform turn-level evaluation or assess only a single dialogue quality dimension. One would expect a good evaluation metric to assess multiple quality dimensions at the dialogue level. To this end, we propose a multi-dimensional dialogue-level metric consisting of three sub-metrics, each targeting a specific dimension. The sub-metrics are trained with novel self-supervised objectives and exhibit strong correlations with human judgment for their respective dimensions. Moreover, we explore two approaches to combining the sub-metrics: metric ensemble and multitask learning. Both approaches yield a holistic metric that significantly outperforms the individual sub-metrics. Compared to the existing state-of-the-art metric, the combined metrics achieve around 16% relative improvement on average across three high-quality dialogue-level evaluation benchmarks.
Anthology ID:
2022.emnlp-main.220
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
3336–3355
URL:
https://aclanthology.org/2022.emnlp-main.220
DOI:
10.18653/v1/2022.emnlp-main.220
Cite (ACL):
Chen Zhang, Luis Fernando D’Haro, Qiquan Zhang, Thomas Friedrichs, and Haizhou Li. 2022. FineD-Eval: Fine-grained Automatic Dialogue-Level Evaluation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3336–3355, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
FineD-Eval: Fine-grained Automatic Dialogue-Level Evaluation (Zhang et al., EMNLP 2022)
PDF:
https://preview.aclanthology.org/nschneid-patch-2/2022.emnlp-main.220.pdf