ImpliRet: Benchmarking the Implicit Fact Retrieval Challenge

Zeinab Sadat Taghavi, Ali Modarressi, Yunpu Ma, Hinrich Schuetze


Abstract
Retrieval systems are central to many NLP pipelines, but they often rely on surface-level cues such as keyword overlap and lexical semantic similarity. To evaluate retrieval beyond these shallow signals, recent benchmarks introduce reasoning-heavy queries; however, these primarily shift the burden to query-side processing techniques, such as prompting or multi-hop retrieval, that can help resolve the complexity. In contrast, we present ImpliRet, a benchmark that shifts the reasoning challenge to document-side processing: the queries are simple, but relevance depends on facts stated implicitly in documents through temporal (e.g., resolving “two days ago”), arithmetic, and world-knowledge relationships. We evaluate a range of sparse and dense retrievers, all of which struggle in this setting: the best nDCG@10 is only 14.91%. We also test whether long-context models can overcome this limitation, but even with a short context of only thirty documents, including the positive document, GPT-o4-mini scores only 55.54%, showing that document-side reasoning remains a challenge. Our code is available at github.com/ZeinabTaghavi/IMPLIRET.
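To make the document-side challenge concrete, the following is a minimal, hypothetical Python sketch (not an actual ImpliRet item) of the temporal case the abstract mentions: the date the query asks about never appears in the document text and only emerges by resolving a relative expression against the document's timestamp. All field names and values below are illustrative assumptions.

    from datetime import date, timedelta

    # Hypothetical document in the spirit of ImpliRet's temporal subset:
    # the fact the query asks about (the purchase date) is stated only implicitly.
    doc = {
        "written_on": date(2025, 6, 14),
        "text": "I finally bought the laptop two days ago.",
    }
    query = "Who bought a laptop on 2025-06-12?"

    # Resolving the relative expression recovers the implicit fact.
    implied_purchase_date = doc["written_on"] - timedelta(days=2)
    print(implied_purchase_date)  # 2025-06-12
    # The resolved date matches the query, yet the string "2025-06-12" never
    # appears in doc["text"], so retrievers keyed to surface overlap miss it.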
Anthology ID:
2025.emnlp-main.1685
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
33156–33178
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1685/
Cite (ACL):
Zeinab Sadat Taghavi, Ali Modarressi, Yunpu Ma, and Hinrich Schuetze. 2025. ImpliRet: Benchmarking the Implicit Fact Retrieval Challenge. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 33156–33178, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
ImpliRet: Benchmarking the Implicit Fact Retrieval Challenge (Taghavi et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1685.pdf
Checklist:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1685.checklist.pdf