Fine-grained Knowledge Enhancement for Retrieval-Augmented Generation

Jingxuan Han, Zhendong Mao, Yi Liu, Yexuan Che, Zheren Fu, Quan Wang


Abstract
Retrieval-augmented generation (RAG) effectively mitigates hallucinations in large language models (LLMs) by filling knowledge gaps with retrieved external information. Most existing studies retrieve knowledge documents based on semantic similarity to assist in answering questions, but overlook the necessary fine-grained information within those documents. In this paper, we propose a novel fine-grained knowledge enhancement method (FKE) for RAG, where fine-grained knowledge primarily consists of sentence-level information that is easily overlooked in the document-based retrieval process. Concretely, we create a disentangled Chain-of-Thought prompting procedure to retrieve fine-grained knowledge from the external knowledge corpus. We then develop a decoding enhancement strategy that constrains the document-based decoding process using this fine-grained knowledge, thereby facilitating more accurate generated answers. Given an existing RAG pipeline, our method can be applied in a plug-and-play manner to enhance its performance, requiring no additional modules or training. Extensive experiments verify the effectiveness and generality of our method.
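The abstract does not spell out how the decoding enhancement operates. Purely as an illustrative sketch, and not the paper's implementation, the snippet below shows one generic way a document-based decoding process could be constrained by sentence-level (fine-grained) knowledge: interpolating next-token logits computed under the two contexts. The function names, the weighting parameter alpha, and the interpolation scheme are assumptions made for illustration only.

# Minimal sketch (hypothetical, not from the paper): blend next-token logits
# conditioned on the full retrieved documents with logits conditioned on the
# selected fine-grained sentences, then decode greedily from the blend.
import numpy as np

def combine_logits(doc_logits: np.ndarray,
                   fine_logits: np.ndarray,
                   alpha: float = 0.5) -> np.ndarray:
    """Interpolate document-conditioned and sentence-conditioned logits."""
    return (1.0 - alpha) * doc_logits + alpha * fine_logits

def greedy_next_token(doc_logits: np.ndarray,
                      fine_logits: np.ndarray,
                      alpha: float = 0.5) -> int:
    """Pick the next token id from the blended distribution."""
    blended = combine_logits(doc_logits, fine_logits, alpha)
    probs = np.exp(blended - blended.max())   # stable softmax
    probs /= probs.sum()
    return int(np.argmax(probs))

# Toy usage with a 5-token vocabulary: the fine-grained context shifts
# the choice from token 0 to token 1.
doc_logits = np.array([2.0, 1.0, 0.5, 0.1, -1.0])
fine_logits = np.array([0.5, 3.0, 0.2, 0.0, -0.5])
print(greedy_next_token(doc_logits, fine_logits, alpha=0.6))  # -> 1

In an actual RAG pipeline, the two logit vectors would come from the same LLM run with different conditioning contexts; the per-step blending weight could also be made adaptive rather than fixed, which is again an assumption here.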
Anthology ID:
2025.findings-acl.522
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
10031–10044
URL:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.findings-acl.522/
DOI:
10.18653/v1/2025.findings-acl.522
Cite (ACL):
Jingxuan Han, Zhendong Mao, Yi Liu, Yexuan Che, Zheren Fu, and Quan Wang. 2025. Fine-grained Knowledge Enhancement for Retrieval-Augmented Generation. In Findings of the Association for Computational Linguistics: ACL 2025, pages 10031–10044, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Fine-grained Knowledge Enhancement for Retrieval-Augmented Generation (Han et al., Findings 2025)
PDF:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.findings-acl.522.pdf