A Compare Aggregate Transformer for Understanding Document-grounded Dialogue

Longxuan Ma, Wei-Nan Zhang, Runxin Sun, Ting Liu


Abstract
Unstructured documents serving as external knowledge for dialogues help generate more informative responses. Previous research has focused on knowledge selection (KS) from the document given the dialogue. However, dialogue history that is unrelated to the current turn may introduce noise into the KS process. In this paper, we propose a Compare Aggregate Transformer (CAT) to jointly denoise the dialogue context and aggregate the document information for response generation. We design two different comparison mechanisms to reduce noise (before and during decoding). In addition, we propose two metrics for evaluating document utilization efficiency based on word overlap. Experimental results on the CMU_DoG dataset show that the proposed CAT model outperforms the state-of-the-art approach and strong baselines.
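The abstract does not spell out the two word-overlap metrics; their exact definitions are in the linked PDF. As a minimal illustrative sketch only, one plausible utilization measure counts the fraction of response tokens that also occur in the grounding document. The function names, tokenization, and normalization below are assumptions, not the paper's actual formulation:

```python
# Hypothetical sketch of a word-overlap-based document-utilization score.
# The tokenizer and the metric's exact form are assumptions; see the paper's
# PDF (linked below) for the authors' actual metric definitions.

import re


def _tokens(text: str) -> list[str]:
    """Lowercase word tokens; punctuation is stripped (an assumption)."""
    return re.findall(r"[a-z0-9']+", text.lower())


def utilization_precision(response: str, document: str) -> float:
    """Fraction of response tokens that also appear in the document.

    Higher values suggest the response reuses more of the document's
    wording, i.e., higher document utilization.
    """
    resp_tokens = _tokens(response)
    doc_vocab = set(_tokens(document))
    if not resp_tokens:
        return 0.0
    return sum(tok in doc_vocab for tok in resp_tokens) / len(resp_tokens)


# A response that reuses document wording scores higher than one that does not.
doc = "The film was directed by Christopher Nolan and released in 2010."
print(utilization_precision("It was directed by Christopher Nolan.", doc))
print(utilization_precision("I have no idea, sorry.", doc))
```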
Anthology ID:
2020.findings-emnlp.122
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2020
Month:
November
Year:
2020
Address:
Online
Editors:
Trevor Cohn, Yulan He, Yang Liu
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1358–1367
URL:
https://aclanthology.org/2020.findings-emnlp.122
DOI:
10.18653/v1/2020.findings-emnlp.122
Cite (ACL):
Longxuan Ma, Wei-Nan Zhang, Runxin Sun, and Ting Liu. 2020. A Compare Aggregate Transformer for Understanding Document-grounded Dialogue. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1358–1367, Online. Association for Computational Linguistics.
Cite (Informal):
A Compare Aggregate Transformer for Understanding Document-grounded Dialogue (Ma et al., Findings 2020)
PDF:
https://preview.aclanthology.org/naacl24-info/2020.findings-emnlp.122.pdf