Mathis Lamarre
2026
Encoding and Decoding Language in the Brain with Language Models
Anuja Negi | Mathis Lamarre | Christine Tseng | Subba Reddy Oota
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 6: Tutorial Abstracts)
This tutorial introduces brain-language model alignment and recent advances in scaling, multilingual brain encoding, brain-informed fine-tuning, and brain decoding with language models, including semantic reconstruction from brain data.
2022
Attention weights accurately predict language representations in the brain
Mathis Lamarre | Catherine Chen | Fatma Deniz
Findings of the Association for Computational Linguistics: EMNLP 2022
In Transformer-based language models (LMs), the attention mechanism converts token embeddings into contextual embeddings that incorporate information from neighboring words. The resulting contextual hidden state embeddings have enabled highly accurate models of brain responses, suggesting that the attention mechanism constructs contextual embeddings that carry information reflected in language-related brain representations. However, it is unclear whether the attention weights that are used to integrate information across words are themselves related to language representations in the brain. To address this question, we analyzed functional magnetic resonance imaging (fMRI) recordings of participants reading English-language narratives. We provided the narrative text as input to two LMs (BERT and GPT-2) and extracted their corresponding attention weights. We then used encoding models to determine how well attention weights can predict recorded brain responses. We find that attention weights accurately predict brain responses in much of the frontal and temporal cortices. Our results suggest that the attention mechanism itself carries information that is reflected in brain representations. Moreover, these results indicate cortical areas in which context integration may occur.
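The encoding-model step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: it assumes the attention weights have already been extracted into a stimulus feature matrix (here simulated with random data), and uses plain ridge regression with a fixed regularization strength, where the paper's implementation details (feature construction, delays for the hemodynamic response, regularization selection) are not reproduced.

```python
import numpy as np

# Hypothetical dimensions: T fMRI time points, F attention-weight
# features (e.g., flattened per-layer, per-head weights), V voxels.
rng = np.random.default_rng(0)
T, F, V = 200, 50, 10

# Simulated stimulus features and brain responses (stand-ins for
# extracted attention weights and recorded fMRI data).
X = rng.standard_normal((T, F))
true_W = rng.standard_normal((F, V))
Y = X @ true_W + 0.1 * rng.standard_normal((T, V))

# Hold out the last quarter of time points for evaluation.
X_tr, X_te = X[:150], X[150:]
Y_tr, Y_te = Y[:150], Y[150:]

# Ridge regression encoding model: W = (X'X + alpha*I)^{-1} X'Y.
alpha = 1.0
W = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(F), X_tr.T @ Y_tr)

# Prediction accuracy: Pearson correlation between predicted and
# held-out responses, computed separately for each voxel.
Y_pred = X_te @ W
r = np.array([np.corrcoef(Y_pred[:, v], Y_te[:, v])[0, 1]
              for v in range(V)])
print("mean voxel correlation:", r.mean())
```

Per-voxel correlations like `r` are what allow statements such as "attention weights accurately predict brain responses in much of the frontal and temporal cortices": voxels are mapped back to their cortical locations and the correlation values are inspected region by region.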