Equipping Retrieval-Augmented Large Language Models with Document Structure Awareness

Lingnan Xu, Chong Feng, Kaiyuan Zhang, Liu Zhengyong, Wenqiang Xu, Fanqing Meng


Abstract
While large language models (LLMs) demonstrate impressive capabilities, their reliance on parametric knowledge often leads to factual inaccuracies. Retrieval-Augmented Generation (RAG) mitigates this by leveraging external documents, yet existing approaches treat retrieved passages as isolated chunks, ignoring the structural information that organizes documents. Motivated by this gap, we propose Retrieve-DocumentRoute-Read (RDR2), a novel framework that explicitly incorporates structural information throughout the RAG process. RDR2 employs an LLM-based router to dynamically navigate document structure trees, jointly evaluating content relevance and hierarchical relationships to assemble optimal evidence. Our key innovation lies in formulating document routing as a trainable task, with automatic action curation and structure-aware passage selection inspired by human reading strategies. Through comprehensive evaluation on five challenging datasets, RDR2 achieves state-of-the-art performance, demonstrating that explicit structural awareness significantly enhances RAG systems’ ability to acquire and utilize knowledge, particularly in complex scenarios requiring multi-document synthesis.
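The abstract describes structure-aware routing in prose only; the Python sketch below is our own minimal illustration of the control flow it implies, not the authors' implementation. The Node layout, the route function, and the token-overlap relevance scorer (standing in for RDR2's trained LLM router) are all hypothetical.

# Illustrative sketch only: in RDR2 the routing decisions are made by a
# trained LLM router; a simple scoring function stands in for it here.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    """A node in a document structure tree (a section heading or a passage)."""
    title: str
    text: str = ""
    children: List["Node"] = field(default_factory=list)
    parent: Optional["Node"] = None

def attach(parent: Node, child: Node) -> Node:
    parent.children.append(child)
    child.parent = parent
    return child

def relevance(query: str, node: Node) -> float:
    """Stand-in for an LLM relevance judgment: crude token overlap."""
    q = set(query.lower().split())
    t = set((node.title + " " + node.text).lower().split())
    return len(q & t) / (len(q) or 1)

def route(query: str, root: Node, budget: int = 3) -> List[Node]:
    """Greedily walk the structure tree: repeatedly expand the frontier node
    with the highest combined score (own relevance plus a bonus inherited
    from its parent, mimicking joint content/hierarchy evaluation) until
    `budget` evidence passages are collected."""
    evidence, frontier = [], [root]
    while frontier and len(evidence) < budget:
        def score(n: Node) -> float:
            parent_bonus = 0.5 * relevance(query, n.parent) if n.parent else 0.0
            return relevance(query, n) + parent_bonus
        best = max(frontier, key=score)
        frontier.remove(best)
        if best.text:                       # passage-bearing node: keep it
            evidence.append(best)
        frontier.extend(best.children)      # descend into its subsections
    return evidence

if __name__ == "__main__":
    doc = Node("RAG survey")
    retrieval = attach(doc, Node("Retrieval methods"))
    attach(retrieval, Node("Dense retrieval", "Dense retrievers embed queries and passages."))
    attach(doc, Node("Evaluation", "Benchmarks for retrieval-augmented generation."))
    for n in route("how do dense retrievers embed passages", doc):
        print(n.title)

In RDR2 proper, expansion actions are curated automatically and the router is trained to make these decisions; the sketch conveys only the tree-walking control flow that such a router operates over.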
Anthology ID:
2025.findings-emnlp.1339
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
24608–24631
URL:
https://preview.aclanthology.org/ingest-luhme/2025.findings-emnlp.1339/
DOI:
10.18653/v1/2025.findings-emnlp.1339
Cite (ACL):
Lingnan Xu, Chong Feng, Kaiyuan Zhang, Liu Zhengyong, Wenqiang Xu, and Fanqing Meng. 2025. Equipping Retrieval-Augmented Large Language Models with Document Structure Awareness. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 24608–24631, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Equipping Retrieval-Augmented Large Language Models with Document Structure Awareness (Xu et al., Findings 2025)
PDF:
https://preview.aclanthology.org/ingest-luhme/2025.findings-emnlp.1339.pdf
Checklist:
2025.findings-emnlp.1339.checklist.pdf