DialogUSR: Complex Dialogue Utterance Splitting and Reformulation for Multiple Intent Detection

Haoran Meng, Zheng Xin, Tianyu Liu, Zizhen Wang, He Feng, Binghuai Lin, Xuemin Zhao, Yunbo Cao, Zhifang Sui


Abstract
While interacting with chatbots, users may elicit multiple intents in a single dialogue utterance. Instead of training a dedicated multi-intent detection model, we propose DialogUSR, a dialogue utterance splitting and reformulation task that first splits a multi-intent user query into several single-intent sub-queries and then recovers all the coreferred and omitted information in the sub-queries. DialogUSR can serve as a plug-in and domain-agnostic module that enables multi-intent detection for deployed chatbots with minimal effort. We collect a high-quality, naturally occurring dataset that covers 23 domains via a multi-step crowd-sourcing procedure. To benchmark the proposed dataset, we propose multiple action-based generative models that involve end-to-end and two-stage training, and conduct in-depth analyses on the pros and cons of the proposed baselines.
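To illustrate the two-step task format described above, here is a minimal sketch of what one instance might look like. The utterance, sub-queries, and field names are hypothetical illustrations, not drawn from the actual dataset, and the split/reformulation in the paper is produced by generative models rather than hand-written structures:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DialogUSRExample:
    """One hypothetical instance of utterance splitting and reformulation."""
    multi_intent_query: str          # original user utterance with several intents
    split_sub_queries: List[str]     # step 1: split into single-intent spans
    reformulated_queries: List[str]  # step 2: coreference/ellipsis recovered

example = DialogUSRExample(
    multi_intent_query="Book a flight to Paris and check the weather there",
    split_sub_queries=[
        "Book a flight to Paris",
        "check the weather there",    # "there" still corefers to Paris
    ],
    reformulated_queries=[
        "Book a flight to Paris",
        "Check the weather in Paris", # coreference resolved, now self-contained
    ],
)

# Each reformulated sub-query is self-contained, so an existing
# single-intent detector can be applied to each one independently.
for query in example.reformulated_queries:
    print(query)
```

The key point is that after reformulation, each sub-query stands alone, which is what lets DialogUSR act as a plug-in module in front of an unmodified single-intent detector.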
Anthology ID:
2022.findings-emnlp.234
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3214–3229
URL:
https://aclanthology.org/2022.findings-emnlp.234
Cite (ACL):
Haoran Meng, Zheng Xin, Tianyu Liu, Zizhen Wang, He Feng, Binghuai Lin, Xuemin Zhao, Yunbo Cao, and Zhifang Sui. 2022. DialogUSR: Complex Dialogue Utterance Splitting and Reformulation for Multiple Intent Detection. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 3214–3229, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
DialogUSR: Complex Dialogue Utterance Splitting and Reformulation for Multiple Intent Detection (Meng et al., Findings 2022)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2022.findings-emnlp.234.pdf