Lexical Knowledge Internalization for Neural Dialog Generation

Zhiyong Wu, Wei Bi, Xiang Li, Lingpeng Kong, Ben Kao


Abstract
We propose knowledge internalization (KI), which aims to complement neural dialog models with lexical knowledge. Instead of further conditioning knowledge-grounded dialog (KGD) models on externally retrieved knowledge, we seek to integrate knowledge about each input token internally into the model’s parameters. To tackle the challenge posed by the large scale of lexical knowledge, we adopt a contrastive learning approach and create an effective token-level lexical knowledge retriever that requires only weak supervision mined from Wikipedia. We demonstrate the effectiveness and general applicability of our approach on various datasets and diverse model structures.
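The abstract describes KI as a token-level contrastive objective trained with weak supervision. As a rough illustration only, the sketch below shows one common way such an InfoNCE-style token-to-knowledge loss could be set up; the function name, temperature value, and in-batch negative sampling are assumptions for illustration, not the authors' implementation (see the linked lividwo/ki repository for the actual code).

```python
import torch
import torch.nn.functional as F

def token_knowledge_contrastive_loss(token_reprs, knowledge_reprs, temperature=0.1):
    """Hypothetical InfoNCE-style sketch: pull each token representation toward the
    embedding of its weakly supervised knowledge text and push it away from the
    knowledge embeddings of the other tokens in the batch.

    token_reprs:     (N, d) hidden states of N tokens from the dialog encoder
    knowledge_reprs: (N, d) embeddings of the knowledge text mined for each token
    """
    token_reprs = F.normalize(token_reprs, dim=-1)
    knowledge_reprs = F.normalize(knowledge_reprs, dim=-1)

    # Pairwise similarities: entry (i, j) scores token i against knowledge j.
    logits = token_reprs @ knowledge_reprs.t() / temperature

    # The matching knowledge for token i sits on the diagonal.
    targets = torch.arange(token_reprs.size(0), device=token_reprs.device)
    return F.cross_entropy(logits, targets)
```

In such a setup the knowledge is "internalized" because the loss shapes the encoder's token representations directly, so no external retrieval is needed at inference time.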
Anthology ID:
2022.acl-long.547
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
7945–7958
URL:
https://aclanthology.org/2022.acl-long.547
DOI:
10.18653/v1/2022.acl-long.547
Cite (ACL):
Zhiyong Wu, Wei Bi, Xiang Li, Lingpeng Kong, and Ben Kao. 2022. Lexical Knowledge Internalization for Neural Dialog Generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7945–7958, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Lexical Knowledge Internalization for Neural Dialog Generation (Wu et al., ACL 2022)
PDF:
https://preview.aclanthology.org/naacl24-info/2022.acl-long.547.pdf
Code
 lividwo/ki
Data
DailyDialog, Wizard of Wikipedia