Learning From Free-Text Human Feedback – Collect New Datasets Or Extend Existing Ones?

Dominic Petrak, Nafise Moosavi, Ye Tian, Nikolai Rozanov, Iryna Gurevych


Abstract
Learning from free-text human feedback is essential for dialog systems, but annotated data is scarce and usually covers only a small fraction of error types known in conversational AI. Instead of collecting and annotating new datasets from scratch, recent advances in synthetic dialog generation could be used to augment existing dialog datasets with the necessary annotations. However, to assess the feasibility of such an effort, it is important to know the types and frequency of free-text human feedback included in these datasets. In this work, we investigate this question for a variety of commonly used dialog datasets, including MultiWoZ, SGD, BABI, PersonaChat, Wizard-of-Wikipedia, and the human-bot split of the Self-Feeding Chatbot. Using our observations, we derive new taxonomies for the annotation of free-text human feedback in dialogs and investigate the impact of including such data in response generation for three state-of-the-art language generation models: GPT-2, LLaMA, and Flan-T5. Our findings provide new insights into the composition of the datasets examined, including error types, user response types, and the relations between them.
Anthology ID:
2023.emnlp-main.1011
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
16259–16279
URL:
https://preview.aclanthology.org/moar-dois/2023.emnlp-main.1011/
DOI:
10.18653/v1/2023.emnlp-main.1011
Cite (ACL):
Dominic Petrak, Nafise Moosavi, Ye Tian, Nikolai Rozanov, and Iryna Gurevych. 2023. Learning From Free-Text Human Feedback – Collect New Datasets Or Extend Existing Ones?. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 16259–16279, Singapore. Association for Computational Linguistics.
Cite (Informal):
Learning From Free-Text Human Feedback – Collect New Datasets Or Extend Existing Ones? (Petrak et al., EMNLP 2023)
PDF:
https://preview.aclanthology.org/moar-dois/2023.emnlp-main.1011.pdf
Video:
https://preview.aclanthology.org/moar-dois/2023.emnlp-main.1011.mp4