Abstract
Recent progress on neural approaches for language processing has triggered a resurgence of interest in building intelligent open-domain chatbots. However, even state-of-the-art neural chatbots cannot produce satisfying responses for every turn in a dialog. A practical solution is to generate multiple response candidates for the same context and then perform response ranking/selection to determine which candidate is best. Previous work on response selection typically trains response rankers on synthetic data derived from existing dialogs: a ground-truth response is treated as the single appropriate response, and inappropriate responses are constructed by random sampling or adversarial methods. In this work, we curated a dataset in which responses produced by multiple response generators for the same dialog context are manually annotated as appropriate (positive) or inappropriate (negative). We argue that such training data better matches the actual use case, enabling models to learn to rank responses effectively. With this new dataset, we conduct a systematic evaluation of state-of-the-art methods for response selection, and demonstrate that both using multiple positive candidates and using manually verified hard negative candidates yield significant performance improvements over adversarial training data, e.g., increases of 3% and 13% in Recall@1, respectively.
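To make the evaluation setting concrete, here is a minimal, hypothetical Python sketch (not the authors' code) of how Recall@k can be computed in this setup, where each dialog context has several scored candidate responses and human labels allow multiple positives per context. The data format, scores, and labels below are illustrative assumptions.

```python
# Hypothetical sketch: Recall@k for response selection when each dialog
# context has several candidates, each human-labeled appropriate (1) or
# inappropriate (0). With multiple positives allowed, Recall@1 is the
# fraction of contexts whose top-scored candidate is appropriate.

from typing import List, Tuple

def recall_at_k(
    contexts: List[List[Tuple[float, int]]],  # per context: (ranker score, label)
    k: int = 1,
) -> float:
    hits = 0
    for candidates in contexts:
        # Rank candidates by the ranker's score, highest first.
        ranked = sorted(candidates, key=lambda c: c[0], reverse=True)
        # Count a hit if any of the top-k candidates is labeled appropriate.
        if any(label == 1 for _, label in ranked[:k]):
            hits += 1
    return hits / len(contexts)

if __name__ == "__main__":
    # Two toy contexts with candidates from several response generators
    # (all scores and labels are made up for illustration).
    data = [
        [(0.91, 1), (0.85, 0), (0.40, 1)],  # top candidate appropriate -> hit
        [(0.77, 0), (0.60, 1), (0.12, 0)],  # top candidate inappropriate -> miss
    ]
    print(f"Recall@1 = {recall_at_k(data, k=1):.2f}")  # prints 0.50
```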
- Anthology ID: 2022.sigdial-1.30
- Volume: Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
- Month: September
- Year: 2022
- Address: Edinburgh, UK
- Editors: Oliver Lemon, Dilek Hakkani-Tur, Junyi Jessy Li, Arash Ashrafzadeh, Daniel Hernández Garcia, Malihe Alikhani, David Vandyke, Ondřej Dušek
- Venue: SIGDIAL
- SIG: SIGDIAL
- Publisher: Association for Computational Linguistics
- Pages: 298–311
- URL: https://aclanthology.org/2022.sigdial-1.30
- DOI: 10.18653/v1/2022.sigdial-1.30
- Cite (ACL): Behnam Hedayatnia, Di Jin, Yang Liu, and Dilek Hakkani-Tur. 2022. A Systematic Evaluation of Response Selection for Open Domain Dialogue. In Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 298–311, Edinburgh, UK. Association for Computational Linguistics.
- Cite (Informal): A Systematic Evaluation of Response Selection for Open Domain Dialogue (Hedayatnia et al., SIGDIAL 2022)
- PDF: https://preview.aclanthology.org/add_acl24_videos/2022.sigdial-1.30.pdf