RPDR: A Round-trip Prediction-Based Data Augmentation Framework for Long-Tail Question Answering

Yiming Zhang, Siyue Zhang, Junbo Zhao, Chen Zhao


Abstract
Long-tail question answering presents significant challenges for large language models (LLMs) due to their limited ability to acquire and accurately recall less common knowledge. Retrieval-augmented generation (RAG) systems have shown great promise in mitigating this limitation by integrating external retrieval mechanisms. However, dense retrieval models often face the same difficulties when generalizing to rare or niche knowledge. In this study, we introduce RPDR, a novel data augmentation framework that selects high-quality, easy-to-learn training data to enhance dense retrievers. Our approach is built around three core components: synthetic data generation, data selection with round-trip prediction to identify easy-to-learn instances, and retriever training with these instances. We evaluate RPDR on two long-tail retrieval benchmarks, PopQA and EntityQuestions, demonstrating substantial improvements over existing retrievers such as BM25 and Contriever, especially on extremely long-tail categories. We identify the strengths and limitations of RPDR through detailed human analysis and propose a routing mechanism that dynamically directs queries to specialized retrieval modules to further improve retrieval performance.
Anthology ID:
2025.emnlp-main.1119
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
22009–22023
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1119/
Cite (ACL):
Yiming Zhang, Siyue Zhang, Junbo Zhao, and Chen Zhao. 2025. RPDR: A Round-trip Prediction-Based Data Augmentation Framework for Long-Tail Question Answering. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 22009–22023, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
RPDR: A Round-trip Prediction-Based Data Augmentation Framework for Long-Tail Question Answering (Zhang et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1119.pdf
Checklist:
 2025.emnlp-main.1119.checklist.pdf