Focus-Constrained Attention Mechanism for CVAE-based Response Generation

Zhi Cui, Yanran Li, Jiayi Zhang, Jianwei Cui, Chen Wei, Bin Wang


Abstract
To model diverse responses for a given post, one promising way is to introduce a latent variable into Seq2Seq models. The latent variable is supposed to capture the discourse-level information and encourage the informativeness of target responses. However, such discourse-level information is often too coarse to be directly utilized by the decoder. To tackle this problem, our idea is to transform the coarse-grained discourse-level information into fine-grained word-level information. Specifically, we first measure the semantic concentration of the corresponding target response on the post words by introducing a fine-grained focus signal. Then, we propose a focus-constrained attention mechanism that takes full advantage of the focus signal to align the input with the target response. The experimental results demonstrate that by exploiting the fine-grained signal, our model can generate more diverse and informative responses compared with several state-of-the-art models.
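The abstract does not give the paper's exact formulation, but the core idea of a focus-constrained attention step can be sketched as biasing ordinary attention weights by a word-level focus signal and renormalizing. The function name, dot-product scoring, and shapes below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def focus_constrained_attention(query, keys, focus):
    """Hypothetical sketch of focus-constrained attention.

    query: (d,)   decoder state
    keys:  (T, d) encoder states for the T post words
    focus: (T,)   word-level focus signal in [0, 1]; higher means the
                  target response concentrates more on that post word
    """
    # Standard scaled dot-product attention scores over post words.
    scores = keys @ query / np.sqrt(keys.shape[-1])  # (T,)
    weights = softmax(scores)
    # Constrain attention by the focus signal, then renormalize so the
    # decoder attends mainly to in-focus post words.
    constrained = weights * focus
    constrained = constrained / (constrained.sum() + 1e-9)
    context = constrained @ keys  # (d,) focus-weighted context vector
    return context, constrained
```

With a uniform-scoring query, the constrained weights collapse onto the in-focus post words, which is the intended effect of turning the coarse discourse-level signal into word-level guidance for the decoder.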
Anthology ID:
2020.findings-emnlp.183
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2020
Month:
November
Year:
2020
Address:
Online
Editors:
Trevor Cohn, Yulan He, Yang Liu
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2021–2030
URL:
https://aclanthology.org/2020.findings-emnlp.183
DOI:
10.18653/v1/2020.findings-emnlp.183
Cite (ACL):
Zhi Cui, Yanran Li, Jiayi Zhang, Jianwei Cui, Chen Wei, and Bin Wang. 2020. Focus-Constrained Attention Mechanism for CVAE-based Response Generation. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2021–2030, Online. Association for Computational Linguistics.
Cite (Informal):
Focus-Constrained Attention Mechanism for CVAE-based Response Generation (Cui et al., Findings 2020)
PDF:
https://preview.aclanthology.org/add_acl24_videos/2020.findings-emnlp.183.pdf
Code
 cuizhi555/Focus-Constrained-Attention-Mechanism-for-CVAE-based-Response-Generation