Pruning the Paradox: How CLIP’s Most Informative Heads Enhance Performance While Amplifying Bias

Avinash Madasu, Vasudev Lal, Phillip Howard


Abstract
CLIP is one of the most popular foundation models and is heavily used for many vision-language tasks, yet little is known about its inner workings. As CLIP is increasingly deployed in real-world applications, it is becoming even more critical to understand its limitations and embedded social biases to mitigate potentially harmful downstream consequences. However, the question of what internal mechanisms drive both the impressive capabilities and the problematic shortcomings of CLIP has largely remained unanswered. To bridge this gap, we study the conceptual consistency of text descriptions for attention heads in CLIP-like models. Specifically, we propose the Concept Consistency Score (CCS), a novel interpretability metric that measures how consistently individual attention heads in CLIP models align with specific concepts. Our soft-pruning experiments reveal that high-CCS heads are critical for preserving model performance, as pruning them leads to a significantly larger performance drop than pruning random or low-CCS heads. Notably, we find that high-CCS heads capture essential concepts and play a key role in out-of-domain detection, concept-specific reasoning, and video-language understanding. Moreover, we show that high-CCS heads learn spurious correlations that amplify social biases. These results position CCS as a powerful interpretability metric exposing the paradox of performance and social biases in CLIP models.
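To make the soft-pruning idea described above concrete, the sketch below gates individual attention heads in a generic multi-head self-attention layer. This is not the authors' implementation and not CLIP's actual module: the class name, the per-head gate buffer, and the pruned head indices are illustrative assumptions; the only point is that setting a head's gate to zero removes that head's contribution before the output projection, which is the effect a soft-pruning ablation relies on.

```python
# Minimal sketch of soft-pruning attention heads via per-head gates.
# Hypothetical module for illustration only; not the paper's code.
import torch
import torch.nn as nn


class GatedMultiheadSelfAttention(nn.Module):
    """Self-attention where gate[h] = 0 soft-prunes head h, gate[h] = 1 keeps it."""

    def __init__(self, embed_dim: int, num_heads: int):
        super().__init__()
        assert embed_dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = embed_dim // num_heads
        self.qkv = nn.Linear(embed_dim, 3 * embed_dim)
        self.out_proj = nn.Linear(embed_dim, embed_dim)
        # One multiplicative gate per head; all heads active by default.
        self.register_buffer("head_gate", torch.ones(num_heads))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, embed_dim)
        b, n, d = x.shape
        qkv = self.qkv(x).reshape(b, n, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)           # each: (b, heads, n, head_dim)
        attn = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
        attn = attn.softmax(dim=-1)
        out = attn @ v                                 # (b, heads, n, head_dim)
        # Soft-prune: scale each head's output by its gate before mixing heads.
        out = out * self.head_gate.view(1, -1, 1, 1)
        out = out.transpose(1, 2).reshape(b, n, d)
        return self.out_proj(out)


def soft_prune_heads(layer: GatedMultiheadSelfAttention, heads_to_prune):
    """Zero the gates of the selected heads (e.g., a high- or low-CCS set)."""
    for h in heads_to_prune:
        layer.head_gate[h] = 0.0


if __name__ == "__main__":
    layer = GatedMultiheadSelfAttention(embed_dim=768, num_heads=12)
    tokens = torch.randn(2, 50, 768)                   # dummy token/patch embeddings
    baseline = layer(tokens)
    soft_prune_heads(layer, heads_to_prune=[3, 7])     # hypothetical head indices
    pruned = layer(tokens)
    print((baseline - pruned).abs().mean())            # nonzero once heads are gated off
```

In an experiment of the kind the abstract describes, such gates would be applied inside a pretrained CLIP-style encoder and the downstream metric compared across pruning high-CCS, low-CCS, and random head sets.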
Anthology ID:
2025.emnlp-main.229
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
4611–4626
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.229/
Cite (ACL):
Avinash Madasu, Vasudev Lal, and Phillip Howard. 2025. Pruning the Paradox: How CLIP’s Most Informative Heads Enhance Performance While Amplifying Bias. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 4611–4626, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Pruning the Paradox: How CLIP’s Most Informative Heads Enhance Performance While Amplifying Bias (Madasu et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.229.pdf
Checklist:
2025.emnlp-main.229.checklist.pdf