2025
Utility-Focused LLM Annotation for Retrieval and Retrieval-Augmented Generation
Hengran Zhang | Minghao Tang | Keping Bi | Jiafeng Guo | Shihao Liu | Daiting Shi | Dawei Yin | Xueqi Cheng
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
This paper explores the use of large language models (LLMs) for annotating document utility in training retrieval and retrieval-augmented generation (RAG) systems, aiming to reduce dependence on costly human annotations. We address the gap between retrieval relevance and generative utility by employing LLMs to annotate document utility. To effectively utilize multiple positive samples per query, we introduce a novel loss that maximizes their summed marginal likelihood. Using the Qwen-2.5-32B model, we annotate utility on the MS MARCO dataset and conduct retrieval experiments on MS MARCO and BEIR, as well as RAG experiments on MS MARCO QA, NQ, and HotpotQA. Our results show that LLM-generated annotations enhance out-of-domain retrieval performance and improve RAG outcomes compared to models trained solely on human annotations or downstream QA metrics. Furthermore, combining LLM annotations with just 20% of human labels achieves performance comparable to using full human annotations. Our study offers a comprehensive approach to utilizing LLM annotations for initializing QA systems on new corpora.
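The abstract's "loss that maximizes their summed marginal likelihood" can be illustrated with a short sketch: instead of contrasting a single positive against in-batch negatives, the loss sums the softmax probability mass over all utility-positive documents for a query and maximizes its log. The sketch below is a minimal, hedged illustration of that idea, not the paper's exact implementation; the function name, temperature parameter, and tensor shapes are assumptions for the example.

```python
import torch
import torch.nn.functional as F

def summed_marginal_likelihood_loss(scores, positive_mask, temperature=1.0):
    """Hypothetical sketch: negative log of the summed softmax probability
    over all positive documents for each query.

    scores:        (batch, num_docs) query-document similarity scores
    positive_mask: (batch, num_docs) bool mask marking utility-positive docs
    """
    log_probs = F.log_softmax(scores / temperature, dim=-1)   # log p(d_j | q)
    # log sum_{i in positives} p(d_i | q), computed stably in log space
    masked = log_probs.masked_fill(~positive_mask, float("-inf"))
    log_marginal = torch.logsumexp(masked, dim=-1)
    return -log_marginal.mean()

# Toy usage: 2 queries, 4 candidate docs each; query 0 has two positives.
scores = torch.randn(2, 4, requires_grad=True)
positive_mask = torch.tensor([[True, False, True, False],
                              [False, True, False, False]])
loss = summed_marginal_likelihood_loss(scores, positive_mask)
loss.backward()
```

With a single positive per query this reduces to the standard cross-entropy (InfoNCE-style) ranking loss; with multiple positives the gradient rewards probability mass placed on any of them rather than forcing an arbitrary choice among equally useful documents.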