Experiential Semantic Information and Brain Alignment: Are Multimodal Models Better than Language Models?

Anna Bavaresco, Raquel Fernández


Abstract
A common assumption in Computational Linguistics is that the text representations learnt by multimodal models are richer and more human-like than those learnt by language-only models, since they are grounded in images or audio, much as human language is grounded in real-world experience. However, empirical studies testing this assumption are largely lacking. We address this gap by comparing word representations from contrastive multimodal models and from language-only ones on two counts: the extent to which they capture experiential information, as defined by an existing norm-based ‘experiential model’, and how well they align with human fMRI responses. Our results indicate that, surprisingly, language-only models are superior to multimodal ones in both respects. They also learn more unique brain-relevant semantic information beyond that shared with the experiential model. Overall, our study highlights the need for computational models that better integrate the complementary semantic information provided by multimodal data sources.
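For context, alignment between model representations and fMRI responses of the kind the abstract mentions is commonly quantified with representational similarity analysis (RSA). The sketch below is illustrative only: it uses randomly generated placeholders for the word embeddings and voxel patterns, and it does not reproduce the authors' actual pipeline, data, or models.

```python
# Minimal RSA sketch: correlate pairwise word dissimilarities computed in
# model space with those computed in brain space. All arrays below are
# hypothetical stand-ins, not the paper's data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_words = 100                                        # hypothetical word set size
model_embeddings = rng.normal(size=(n_words, 768))   # one vector per word (placeholder)
fmri_responses = rng.normal(size=(n_words, 5000))    # one voxel pattern per word (placeholder)

# Representational dissimilarity matrices (condensed form): pairwise
# distances between words, computed separately in each space.
model_rdm = pdist(model_embeddings, metric="cosine")
brain_rdm = pdist(fmri_responses, metric="correlation")

# Alignment score: rank correlation between the two RDMs.
alignment, _ = spearmanr(model_rdm, brain_rdm)
print(f"RSA brain alignment: {alignment:.3f}")
```

With real data, a higher rank correlation indicates that words the model represents as similar also evoke similar brain responses; comparing this score across multimodal and language-only models is one standard way to operationalise the comparison the abstract describes.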
Anthology ID:
2025.conll-1.10
Volume:
Proceedings of the 29th Conference on Computational Natural Language Learning
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Gemma Boleda, Michael Roth
Venues:
CoNLL | WS
Publisher:
Association for Computational Linguistics
Pages:
141–155
URL:
https://preview.aclanthology.org/display_plenaries/2025.conll-1.10/
Cite (ACL):
Anna Bavaresco and Raquel Fernández. 2025. Experiential Semantic Information and Brain Alignment: Are Multimodal Models Better than Language Models?. In Proceedings of the 29th Conference on Computational Natural Language Learning, pages 141–155, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Experiential Semantic Information and Brain Alignment: Are Multimodal Models Better than Language Models? (Bavaresco & Fernández, CoNLL 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.conll-1.10.pdf