Yuan You
2022
McQueen: a Benchmark for Multimodal Conversational Query Rewrite
Yifei Yuan | Chen Shi | Runze Wang | Liyi Chen | Feijun Jiang | Yuan You | Wai Lam
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
The task of query rewrite aims to convert an in-context query into its fully-specified version, in which ellipses are completed and coreferences are resolved according to the conversation history. Although much progress has been made, less effort has been devoted to real-scenario conversations that involve drawing information from more than one modality. In this paper, we propose the task of multimodal conversational query rewrite (McQR), which performs query rewrite under a multimodal visual conversation setting. We collect a large-scale dataset named McQueen based on manual annotation, which contains 15k visual conversations and over 80k queries, each associated with a fully-specified rewrite. In addition, for entities appearing in the rewrite, we provide the corresponding bounding-box annotations in the images. We then use the McQueen dataset to benchmark a state-of-the-art method for tackling the McQR task, based on a multimodal pre-trained model with a pointer generator. Extensive experiments demonstrate the effectiveness of our model on this task.
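The benchmarked model couples a multimodal pre-trained encoder-decoder with a pointer generator, which lets the decoder either generate a rewrite token from the vocabulary or copy one from the conversation history. Below is a minimal PyTorch sketch of such a copy step; the tensor and layer names are hypothetical illustrations, not the authors' implementation.

```python
# Illustrative pointer-generator decoding step (NOT the McQueen authors' code).
# Assumes a decoder hidden state, an attention distribution over the source
# (history + current query) tokens, and those tokens' vocabulary ids.
import torch
import torch.nn.functional as F

def pointer_generator_step(dec_state,      # [batch, hidden] current decoder state
                           context_vec,    # [batch, hidden] attention-weighted encoder states
                           attn_dist,      # [batch, src_len] attention over source tokens (sums to 1)
                           src_token_ids,  # [batch, src_len] vocabulary ids of source tokens
                           vocab_logits,   # [batch, vocab] decoder output logits
                           w_gen):         # hypothetical nn.Linear(2 * hidden, 1)
    # Probability of generating from the vocabulary vs. copying from the source.
    p_gen = torch.sigmoid(w_gen(torch.cat([dec_state, context_vec], dim=-1)))  # [batch, 1]
    vocab_dist = F.softmax(vocab_logits, dim=-1)
    # Mix the two distributions: scale the generation distribution and
    # scatter-add copy probabilities onto the source tokens' vocabulary ids.
    final_dist = p_gen * vocab_dist
    final_dist = final_dist.scatter_add(1, src_token_ids, (1.0 - p_gen) * attn_dist)
    return final_dist  # [batch, vocab], sums to 1 per example
```

With this formulation, the rewrite loss can be the negative log-likelihood of the gold token under `final_dist`, so entity mentions that appear only in the history can still receive probability mass through the copy path.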
History-Aware Hierarchical Transformer for Multi-session Open-domain Dialogue System
Tong Zhang | Yong Liu | Boyang Li | Zhiwei Zeng | Pengwei Wang | Yuan You | Chunyan Miao | Lizhen Cui
Findings of the Association for Computational Linguistics: EMNLP 2022
With the evolution of pre-trained language models, current open-domain dialogue systems have achieved great progress in conducting one-session conversations. In contrast, Multi-Session Conversation (MSC), which consists of multiple sessions over a long term with the same user, is under-investigated. In this paper, we propose the History-Aware Hierarchical Transformer (HAHT) for multi-session open-domain dialogue. HAHT maintains a long-term memory of past conversations and utilizes this history to understand the current conversation context and generate well-informed, context-relevant responses. Specifically, HAHT first encodes past conversation sessions hierarchically into a history memory. It then leverages this historical information to facilitate understanding of the current conversation context by encoding the history memory together with the current context through attention-based mechanisms. Finally, to explicitly utilize historical information, HAHT uses a history-aware response generator that switches between a generic vocabulary and a history-aware vocabulary. Experimental results on a large-scale MSC dataset show that the proposed HAHT model consistently outperforms baseline models. Human evaluation results further indicate that HAHT generates more human-like, context-relevant, and history-relevant responses than the baselines.
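The hierarchical encoding described above can be pictured as a two-level Transformer: a token-level encoder summarizes each past session into a vector, and a session-level encoder turns those vectors into a history memory that the current context can attend to. The following is a minimal sketch under that reading; module names and pooling choices are assumptions for illustration, not the authors' code.

```python
# Illustrative two-level history encoder (NOT the HAHT authors' implementation).
# Assumes each past session is already tokenized and embedded to d_model vectors.
import torch
import torch.nn as nn

class HierarchicalHistoryEncoder(nn.Module):
    def __init__(self, d_model=256, nhead=4):
        super().__init__()
        token_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        session_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.session_encoder = nn.TransformerEncoder(token_layer, num_layers=2)   # token level
        self.memory_encoder = nn.TransformerEncoder(session_layer, num_layers=1)  # session level

    def forward(self, sessions):
        # sessions: list of [1, seq_len_i, d_model] embedded past sessions
        session_vecs = []
        for s in sessions:
            enc = self.session_encoder(s)         # contextualize tokens within one session
            session_vecs.append(enc.mean(dim=1))  # mean-pool to a single session vector
        memory = torch.stack(session_vecs, dim=1)  # [1, num_sessions, d_model]
        # Session-level encoding yields the history memory that the current
        # conversation context can attend to when generating a response.
        return self.memory_encoder(memory)
```

A response generator can then cross-attend from the current context to this memory, which keeps the cost of long-term history roughly proportional to the number of sessions rather than the total number of history tokens.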