Abstract
Information retrieval using dense, low-dimensional representations has recently become popular and has been shown to outperform traditional sparse representations such as BM25. However, no previous work has investigated how dense representations perform with large index sizes. We show theoretically and empirically that the performance of dense representations degrades faster than that of sparse representations as the index size grows. In extreme cases, this can even lead to a tipping point where, beyond a certain index size, sparse representations outperform dense representations. We show that this behavior is tightly connected to the number of dimensions of the representations: the lower the dimension, the higher the chance of false positives, i.e., returning irrelevant documents.
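The dimensionality effect described in the abstract can be illustrated with a small Monte Carlo simulation. This is a minimal sketch under simplifying assumptions, not code from the paper: irrelevant index entries are modeled as random unit vectors, the relevant document is assumed to sit at a fixed cosine similarity `rel_sim` to the query, and `false_positive_rate` is a hypothetical helper name.

```python
import numpy as np

def false_positive_rate(dim, index_size, rel_sim=0.7, trials=100, seed=0):
    """Estimate how often at least one random 'irrelevant' unit vector in
    an index of `index_size` entries scores a higher cosine similarity to
    a random query than a relevant document at similarity `rel_sim`."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        query = rng.standard_normal(dim, dtype=np.float32)
        query /= np.linalg.norm(query)
        best = -1.0
        # Stream the index in chunks to keep memory bounded.
        remaining = index_size
        while remaining > 0:
            chunk = min(remaining, 20_000)
            docs = rng.standard_normal((chunk, dim), dtype=np.float32)
            docs /= np.linalg.norm(docs, axis=1, keepdims=True)
            best = max(best, float((docs @ query).max()))
            remaining -= chunk
        # A false positive: some random distractor beats the relevant doc.
        hits += best > rel_sim
    return hits / trials

for dim in (32, 128, 768):
    for n in (10_000, 100_000):
        print(f"dim={dim:4d}  index={n:7d}  "
              f"false-positive rate={false_positive_rate(dim, n):.2f}")
```

In this toy model, only the lowest-dimensional index should show a noticeable false-positive rate, and that rate should grow with the index size: random similarities concentrate near zero with spread roughly proportional to 1/sqrt(dim), so in low dimensions a large index is far more likely to contain a distractor that outscores the relevant document.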
- Anthology ID: 2021.acl-short.77
- Volume: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
- Month: August
- Year: 2021
- Address: Online
- Venues: ACL | IJCNLP
- Publisher: Association for Computational Linguistics
- Pages: 605–611
- URL: https://aclanthology.org/2021.acl-short.77
- DOI: 10.18653/v1/2021.acl-short.77
- Cite (ACL): Nils Reimers and Iryna Gurevych. 2021. The Curse of Dense Low-Dimensional Information Retrieval for Large Index Sizes. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 605–611, Online. Association for Computational Linguistics.
- Cite (Informal): The Curse of Dense Low-Dimensional Information Retrieval for Large Index Sizes (Reimers & Gurevych, ACL-IJCNLP 2021)
- PDF: https://preview.aclanthology.org/ingestion-script-update/2021.acl-short.77.pdf
- Data: MS MARCO, Natural Questions