Seeing Through Words, Speaking Through Pixels: Deep Representational Alignment Between Vision and Language Models

Zoe Wanying He, Sean Trott, Meenakshi Khosla


Abstract
Recent studies show that deep vision-only and language-only models—trained on disjoint modalities—nonetheless project their inputs into a partially aligned representational space. Yet we still lack a clear picture of _where_ in each network this convergence emerges, _what_ visual or linguistic cues support it, _whether_ it captures human preferences in many-to-many image-text scenarios, and _how_ aggregating exemplars of the same concept affects alignment. Here, we systematically investigate these questions. We find that alignment peaks in mid-to-late layers of both model types, reflecting a shift from modality-specific to conceptually shared representations. This alignment is robust to appearance-only changes but collapses when semantics are altered (e.g., object removal or word-order scrambling), highlighting that the shared code is truly semantic. Moving beyond the one-to-one image-caption paradigm, a forced-choice “Pick-a-Pic” task shows that human preferences for image-caption matches are mirrored in the embedding spaces across all vision-language model pairs. This pattern holds bidirectionally when multiple captions correspond to a single image, demonstrating that models capture fine-grained semantic distinctions akin to human judgments. Surprisingly, averaging embeddings across exemplars amplifies alignment rather than blurring detail. Together, our results demonstrate that unimodal networks converge on a shared semantic code that aligns with human judgments and strengthens with exemplar aggregation.
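For readers who want to probe this kind of cross-modal alignment themselves, the sketch below illustrates the general recipe with linear CKA assumed as the similarity metric and random arrays standing in for per-exemplar vision- and language-model embeddings. It is not the authors' code; the metric, dimensions, and variable names are illustrative assumptions, and the final comparison mirrors the exemplar-averaging step described in the abstract.

```python
# Minimal sketch (not the authors' code): cross-modal alignment scored with
# linear CKA, with and without exemplar averaging. The metric, dimensions, and
# random stand-in embeddings are assumptions for illustration only.
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two embedding matrices of shape (n_items, dim)."""
    X = X - X.mean(axis=0)  # center each feature dimension
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro"))

rng = np.random.default_rng(0)
n_concepts, n_exemplars, d_vision, d_language = 100, 8, 768, 1024

# Stand-ins for embeddings of n_exemplars images / captions per concept,
# taken from one vision-model layer and one language-model layer.
vision_emb = rng.normal(size=(n_concepts, n_exemplars, d_vision))
language_emb = rng.normal(size=(n_concepts, n_exemplars, d_language))

# Single-exemplar alignment: one image and one caption per concept.
single = linear_cka(vision_emb[:, 0], language_emb[:, 0])

# Exemplar-averaged alignment: mean embedding over all exemplars of a concept.
averaged = linear_cka(vision_emb.mean(axis=1), language_emb.mean(axis=1))

print(f"single-exemplar CKA: {single:.3f}  exemplar-averaged CKA: {averaged:.3f}")
```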
Anthology ID: 2025.emnlp-main.1806
Volume: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2025
Address: Suzhou, China
Editors: Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rosé, Violet Peng
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 35645–35660
URL: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1806/
Cite (ACL): Zoe Wanying He, Sean Trott, and Meenakshi Khosla. 2025. Seeing Through Words, Speaking Through Pixels: Deep Representational Alignment Between Vision and Language Models. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 35645–35660, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): Seeing Through Words, Speaking Through Pixels: Deep Representational Alignment Between Vision and Language Models (He et al., EMNLP 2025)
PDF: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1806.pdf
Checklist: 2025.emnlp-main.1806.checklist.pdf