Vision-Language Models Align with Human Neural Representations in Concept Processing
Anna Bavaresco, Marianne De Heer Kloots, Sandro Pezzelle, Raquel Fernández
Abstract
Recent studies suggest that transformer-based vision-language models (VLMs) capture the multimodality of concept processing in the human brain. However, a systematic evaluation exploring different types of VLM architectures and the role played by visual and textual context is still lacking. Here, we analyse multiple VLMs employing different strategies to integrate visual and textual modalities, along with language-only counterparts. We measure the alignment between the models' concept representations and existing fMRI brain responses to concept words presented in two experimental conditions, where either visual (pictures) or textual (sentences) context is provided. Our results reveal that VLMs outperform their language-only counterparts in both experimental conditions. However, controlled ablation studies show that only for some VLMs, such as LXMERT and IDEFICS2, does brain alignment stem from genuinely learning more human-like concepts during _pretraining_, while others are highly sensitive to the context provided at _inference_. Additionally, we find that vision-language encoders are more brain-aligned than more recent, generative VLMs. Altogether, our study shows that VLMs align with human neural representations in concept processing, while highlighting differences among architectures. We open-source code and materials to reproduce our experiments at: [https://github.com/dmg-illc/vl-concept-processing](https://github.com/dmg-illc/vl-concept-processing).
- Anthology ID:
- 2026.eacl-long.150
- Volume:
- Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
- Month:
- March
- Year:
- 2026
- Address:
- Rabat, Morocco
- Editors:
- Vera Demberg, Kentaro Inui, Lluís Màrquez
- Venue:
- EACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 3255–3274
- URL:
- https://preview.aclanthology.org/ingest-eacl/2026.eacl-long.150/
- Cite (ACL):
- Anna Bavaresco, Marianne De Heer Kloots, Sandro Pezzelle, and Raquel Fernández. 2026. Vision-Language Models Align with Human Neural Representations in Concept Processing. In Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3255–3274, Rabat, Morocco. Association for Computational Linguistics.
- Cite (Informal):
- Vision-Language Models Align with Human Neural Representations in Concept Processing (Bavaresco et al., EACL 2026)
- PDF:
- https://preview.aclanthology.org/ingest-eacl/2026.eacl-long.150.pdf