DiQAD: A Benchmark Dataset for Open-domain Dialogue Quality Assessment

Yukun Zhao, Lingyong Yan, Weiwei Sun, Chong Meng, Shuaiqiang Wang, Zhicong Cheng, Zhaochun Ren, Dawei Yin


Abstract
Dialogue assessment plays a critical role in the development of open-domain dialogue systems. Existing work fails to provide an end-to-end, human-epistemic assessment dataset: it either covers only sub-metrics such as coherence, or relies on dialogues conversed between annotators that are far from real user settings. In this paper, we release a large-scale dialogue quality assessment dataset (DiQAD) for automatically assessing open-domain dialogue quality. Specifically, we (1) establish assessment criteria based on dimensions that conform to human judgements of dialogue quality, and (2) annotate large-scale dialogues conversed between real users according to these criteria, yielding around 100,000 dialogues. We conduct several experiments and report the performance of the baselines as the benchmark on DiQAD. The dataset is openly accessible at https://github.com/yukunZhao/Dataset_Dialogue_quality_evaluation.
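For readers who want to experiment with the benchmark, the repository above hosts the released dialogues. The abstract does not specify the file layout or field names, so the following is only a minimal loading sketch, assuming a JSON Lines file with hypothetical `dialogue` (list of utterances) and `quality_label` fields; adapt it to the repository's actual schema.

```python
import json

# Minimal sketch for loading DiQAD-style data.
# Assumes a JSON Lines file where each record contains a dialogue
# (a list of utterances) and an overall quality label; the actual
# field names and format in the released dataset may differ.
def load_dialogues(path):
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            records.append(json.loads(line))
    return records

if __name__ == "__main__":
    data = load_dialogues("diqad_train.jsonl")  # hypothetical filename
    print(f"Loaded {len(data)} dialogues")
    # Inspect one record to confirm the real schema before building a model.
    if data:
        print(data[0])
```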
Anthology ID:
2023.findings-emnlp.1010
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
15128–15145
URL:
https://aclanthology.org/2023.findings-emnlp.1010
DOI:
10.18653/v1/2023.findings-emnlp.1010
Cite (ACL):
Yukun Zhao, Lingyong Yan, Weiwei Sun, Chong Meng, Shuaiqiang Wang, Zhicong Cheng, Zhaochun Ren, and Dawei Yin. 2023. DiQAD: A Benchmark Dataset for Open-domain Dialogue Quality Assessment. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 15128–15145, Singapore. Association for Computational Linguistics.
Cite (Informal):
DiQAD: A Benchmark Dataset for Open-domain Dialogue Quality Assessment (Zhao et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.1010.pdf