Efficient Tuning of Large Language Models for Knowledge-Grounded Dialogue Generation
Bo Zhang, Hui Ma, Dailin Li, Jian Ding, Jian Wang, Bo Xu, Hongfei Lin
Abstract
Large language models (LLMs) demonstrate remarkable text comprehension and generation capabilities but often lack the ability to utilize up-to-date or domain-specific knowledge not included in their training data. To address this gap, we introduce KEDiT, an efficient method for fine-tuning LLMs for knowledge-grounded dialogue generation. KEDiT operates in two main phases. First, it employs an information bottleneck to compress retrieved knowledge into learnable parameters, retaining essential information while minimizing computational overhead. Second, a lightweight knowledge-aware adapter integrates these compressed knowledge vectors into the LLM during fine-tuning, updating less than 2% of the model parameters. The experimental results on the Wizard of Wikipedia and a newly constructed PubMed-Dialog dataset demonstrate that KEDiT excels in generating contextually relevant and informative responses, outperforming competitive baselines in automatic, LLM-based, and human evaluations. This approach effectively combines the strengths of pretrained LLMs with the adaptability needed for incorporating dynamic knowledge, presenting a scalable solution for fields such as medicine.
- Anthology ID:
- 2025.tacl-1.47
- Volume:
- Transactions of the Association for Computational Linguistics, Volume 13
- Year:
- 2025
- Address:
- Cambridge, MA
- Venue:
- TACL
- Publisher:
- MIT Press
- Pages:
- 1007–1031
- URL:
- https://preview.aclanthology.org/ingest-eacl/2025.tacl-1.47/
- DOI:
- 10.1162/tacl.a.17
- Cite (ACL):
- Bo Zhang, Hui Ma, Dailin Li, Jian Ding, Jian Wang, Bo Xu, and Hongfei Lin. 2025. Efficient Tuning of Large Language Models for Knowledge-Grounded Dialogue Generation. Transactions of the Association for Computational Linguistics, 13:1007–1031.
- Cite (Informal):
- Efficient Tuning of Large Language Models for Knowledge-Grounded Dialogue Generation (Zhang et al., TACL 2025)
- PDF:
- https://preview.aclanthology.org/ingest-eacl/2025.tacl-1.47.pdf
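The paper's implementation details are not reproduced on this page, but the two ideas in the abstract can be illustrated schematically: variable-length retrieved knowledge is compressed into a fixed number of learnable query vectors via cross-attention, and a small residual adapter is the only trainable component while the LLM weights stay frozen. The NumPy sketch below is an assumption-laden illustration of that pattern, not the authors' code; all names and sizes (`d_model`, `n_queries`, the adapter width `r`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_tokens, n_queries = 64, 128, 8  # hypothetical sizes for illustration

# Stand-in for embeddings of retrieved knowledge (e.g., a Wikipedia passage).
knowledge = rng.standard_normal((n_tokens, d_model))

# A small set of learnable query vectors; after training they would hold the
# compressed, fixed-size representation of the knowledge.
queries = rng.standard_normal((n_queries, d_model)) * 0.02

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Cross-attention: each query attends over all knowledge tokens, so the
# output has n_queries rows regardless of how long the retrieved text is.
attn = softmax(queries @ knowledge.T / np.sqrt(d_model))
compressed = attn @ knowledge  # shape: (n_queries, d_model)

# A bottleneck adapter in residual form: only W_down and W_up would be
# trained, a tiny fraction of a full LLM's parameters.
r = 4  # adapter bottleneck width
W_down = rng.standard_normal((d_model, r)) * 0.02
W_up = np.zeros((r, d_model))  # zero-init so the adapter starts as identity

def adapter(h):
    return h + np.maximum(h @ W_down, 0.0) @ W_up

fused = adapter(compressed)
print(compressed.shape, fused.shape)
```

The fixed-size output is what keeps the overhead low: however long the retrieved passage, the frozen LLM only ever sees `n_queries` extra vectors, and only the adapter's two small projection matrices receive gradient updates.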