Context is Gold to find the Gold Passage: Evaluating and Training Contextual Document Embeddings

Max Conti, Manuel Faysse, Gautier Viaud, Antoine Bosselut, Celine Hudelot, Pierre Colombo


Abstract
A limitation of modern document retrieval embedding methods is that they typically encode passages (chunks) from the same document independently, often overlooking crucial contextual information from the rest of the document that could greatly improve individual chunk representations. In this work, we introduce ConTEB (Context-aware Text Embedding Benchmark), a benchmark designed to evaluate retrieval models on their ability to leverage document-wide context. Our results show that state-of-the-art embedding models struggle in retrieval scenarios where context is required. To address this limitation, we propose InSeNT (In-sequence Negative Training), a novel contrastive post-training approach which, combined with late chunking pooling, enhances contextual representation learning while preserving computational efficiency. Our method significantly improves retrieval quality on ConTEB without sacrificing base model performance. We further find that chunks embedded with our method are more robust to suboptimal chunking strategies and larger retrieval corpus sizes. We open-source all artifacts at https://github.com/illuin-tech/contextual-embeddings.
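The late chunking pooling the abstract refers to can be illustrated with a minimal sketch: instead of encoding each chunk in isolation, the whole document is encoded in a single pass so every token embedding carries document-wide context, and chunk vectors are then obtained by pooling over each chunk's token span. The function name and the toy embeddings below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def late_chunk_pool(token_embeddings, chunk_spans):
    """Pool contextual token embeddings into one vector per chunk.

    token_embeddings: (num_tokens, dim) array produced by encoding the
    *entire* document in one forward pass, so each token vector already
    reflects document-wide context.
    chunk_spans: list of (start, end) token-index pairs, one per chunk.
    Returns a (num_chunks, dim) array of mean-pooled chunk embeddings.
    """
    return np.stack([
        token_embeddings[start:end].mean(axis=0)
        for start, end in chunk_spans
    ])

# Toy example: 10 "tokens" of dimension 4, split into two chunks.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((10, 4))
chunks = late_chunk_pool(tokens, [(0, 6), (6, 10)])
print(chunks.shape)  # (2, 4)
```

In a real pipeline the token embeddings would come from a long-context encoder run over the full document; the pooling step itself adds no extra model calls, which is how context awareness is gained without sacrificing efficiency.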
Anthology ID:
2025.emnlp-main.1150
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
22605–22619
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1150/
Cite (ACL):
Max Conti, Manuel Faysse, Gautier Viaud, Antoine Bosselut, Celine Hudelot, and Pierre Colombo. 2025. Context is Gold to find the Gold Passage: Evaluating and Training Contextual Document Embeddings. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 22605–22619, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Context is Gold to find the Gold Passage: Evaluating and Training Contextual Document Embeddings (Conti et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1150.pdf
Checklist:
 2025.emnlp-main.1150.checklist.pdf