Sparse, Dense, and Attentional Representations for Text Retrieval
Yi Luan, Jacob Eisenstein, Kristina Toutanova, Michael Collins
Abstract
Dual encoders perform retrieval by encoding documents and queries into dense low-dimensional vectors, scoring each document by its inner product with the query. We investigate the capacity of this architecture relative to sparse bag-of-words models and attentional neural networks. Using both theoretical and empirical analysis, we establish connections between the encoding dimension, the margin between gold and lower-ranked documents, and the document length, suggesting limitations in the capacity of fixed-length encodings to support precise retrieval of long documents. Building on these insights, we propose a simple neural model that combines the efficiency of dual encoders with some of the expressiveness of more costly attentional architectures, and explore sparse-dense hybrids to capitalize on the precision of sparse retrieval. These models outperform strong alternatives in large-scale retrieval.
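As a rough illustration of the scoring scheme the abstract describes, the sketch below computes dual-encoder scores as inner products between fixed-length query and document vectors, then interpolates them with sparse scores to form a hybrid ranking. This is a minimal NumPy sketch, not the paper's implementation: the toy embeddings, the BM25-style placeholder scores, and the interpolation weight `lam` are all illustrative assumptions.

```python
import numpy as np

def dense_scores(query_vec: np.ndarray, doc_matrix: np.ndarray) -> np.ndarray:
    """Inner-product scores between one query vector and every document vector."""
    return doc_matrix @ query_vec

# Toy setup: three documents embedded in a d=4 dense space. A real dual
# encoder (e.g. a BERT-based model) would produce these vectors; here they
# are random placeholders.
rng = np.random.default_rng(0)
doc_dense = rng.normal(size=(3, 4))
query_dense = rng.normal(size=4)

# Sparse bag-of-words scores for the same documents (e.g. from BM25);
# these numbers are made up for illustration.
sparse = np.array([2.1, 0.3, 1.7])

dense = dense_scores(query_dense, doc_dense)

# Sparse-dense hybrid: a linear interpolation of the two score lists.
# The weight `lam` is a hypothetical tuning parameter, not a value
# taken from the paper.
lam = 0.5
hybrid = lam * sparse + (1.0 - lam) * dense

print("ranking by hybrid score:", np.argsort(-hybrid))
```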
- Anthology ID: 2021.tacl-1.20
- Volume: Transactions of the Association for Computational Linguistics, Volume 9
- Year: 2021
- Address: Cambridge, MA
- Editors: Brian Roark, Ani Nenkova
- Venue: TACL
- Publisher: MIT Press
- Pages: 329–345
- URL: https://aclanthology.org/2021.tacl-1.20
- DOI: 10.1162/tacl_a_00369
- Cite (ACL): Yi Luan, Jacob Eisenstein, Kristina Toutanova, and Michael Collins. 2021. Sparse, Dense, and Attentional Representations for Text Retrieval. Transactions of the Association for Computational Linguistics, 9:329–345.
- Cite (Informal): Sparse, Dense, and Attentional Representations for Text Retrieval (Luan et al., TACL 2021)
- PDF: https://preview.aclanthology.org/ingest-acl-2023-videos/2021.tacl-1.20.pdf