EqualizeIR: Mitigating Linguistic Biases in Retrieval Models

Jiali Cheng, Hadi Amiri


Abstract
This study finds that existing information retrieval (IR) models show significant biases based on the linguistic complexity of input queries, performing well on linguistically simpler (or more complex) queries while underperforming on linguistically more complex (or simpler) queries. To address this issue, we propose EqualizeIR, a framework to mitigate linguistic biases in IR models. EqualizeIR uses a linguistically biased weak learner to capture linguistic biases in IR datasets and then trains a robust model by regularizing and refining its predictions using the biased weak learner. This approach effectively prevents the robust model from overfitting to specific linguistic patterns in data. We propose four approaches for developing linguistically biased models. Extensive experiments on several datasets show that our method reduces performance disparities across linguistically simple and complex queries, while improving overall retrieval performance.
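The regularize-and-refine step described above can be illustrated with a short sketch. The abstract does not specify the exact loss, so the following is a minimal, hypothetical Python example assuming a product-of-experts-style combination (a common choice in bias-mitigation work) of a frozen biased weak learner's scores with the robust model's scores; all names and shapes here are illustrative, not the paper's actual implementation.

    import torch
    import torch.nn.functional as F

    def debiased_loss(robust_logits: torch.Tensor,
                      biased_logits: torch.Tensor,
                      labels: torch.Tensor) -> torch.Tensor:
        """Product-of-experts-style debiasing: add the frozen biased
        learner's log-probabilities to the robust model's logits before
        the cross-entropy, so examples the biased learner already solves
        contribute smaller gradients to the robust model."""
        # Detach: the biased weak learner acts as a fixed prior and is not updated.
        combined = robust_logits + F.log_softmax(biased_logits, dim=-1).detach()
        return F.cross_entropy(combined, labels)

    # Toy usage: relevance scores for 4 queries over 3 candidate passages each.
    robust = torch.randn(4, 3, requires_grad=True)
    biased = torch.randn(4, 3)           # from the linguistically biased weak learner
    gold = torch.tensor([0, 2, 1, 0])    # index of the relevant passage per query
    loss = debiased_loss(robust, biased, gold)
    loss.backward()

Under this assumed formulation, queries whose linguistic complexity alone makes them easy (or hard) for the biased learner are down-weighted during training, which is one plausible way the robust model avoids overfitting to linguistic patterns.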
Anthology ID:
2025.naacl-short.75
Volume:
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
889–898
URL:
https://preview.aclanthology.org/moar-dois/2025.naacl-short.75/
DOI:
10.18653/v1/2025.naacl-short.75
Cite (ACL):
Jiali Cheng and Hadi Amiri. 2025. EqualizeIR: Mitigating Linguistic Biases in Retrieval Models. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers), pages 889–898, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
EqualizeIR: Mitigating Linguistic Biases in Retrieval Models (Cheng & Amiri, NAACL 2025)
PDF:
https://preview.aclanthology.org/moar-dois/2025.naacl-short.75.pdf