Mitigating Bias in RAG: Controlling the Embedder
Taeyoun Kim | Jacob Mitchell Springer | Aditi Raghunathan | Maarten Sap
Findings of the Association for Computational Linguistics: ACL 2025
In retrieval augmented generation (RAG) systems, each individual component—the LLM, embedder, and corpus—could introduce biases in the form of skews towards certain genders or political leanings. In this work, we study the conflict between the biases of each component and their relationship to the overall bias of the RAG system, which we call bias conflict. Examining both gender and political biases as case studies, we show that bias conflict can be characterized through a linear relationship among components despite its complexity. Through fine-tuning, we demonstrate how to control the bias of the embedder while maintaining utility, and we reveal the importance of reverse-biasing the embedder to mitigate bias in the overall system. Additionally, we find that LLMs and tasks exhibit varying sensitivities to bias, a crucial factor to consider for debiasing. Our results underscore that a fair RAG system can be better achieved by carefully controlling the bias of the embedder rather than increasing its fairness.