The LLM Language Network: A Neuroscientific Approach for Identifying Causally Task-Relevant Units
Badr AlKhamissi, Greta Tuckute, Antoine Bosselut, Martin Schrimpf
Abstract
Large language models (LLMs) exhibit remarkable capabilities not just on language tasks, but also on various tasks that are not linguistic in nature, such as logical reasoning and social inference. In the human brain, neuroscience has identified a core language system that selectively and causally supports language processing. Here, we ask whether similar specialization for language emerges in LLMs. We identify language-selective units within 18 popular LLMs, using the same localization approach that is used in neuroscience. We then establish the causal role of these units by demonstrating that ablating LLM language-selective units, but not random units, leads to drastic deficits on language tasks. Correspondingly, language-selective LLM units are more aligned with brain recordings from the human language system than random units are. Finally, we investigate whether our localization method extends to other cognitive domains: while we find specialized networks in some LLMs for reasoning and social capabilities, there are substantial differences among models. These findings provide functional and causal evidence for specialization in large language models and highlight parallels with the functional organization of the brain.
- Anthology ID:
- 2025.naacl-long.544
- Volume:
- Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
- Month:
- April
- Year:
- 2025
- Address:
- Albuquerque, New Mexico
- Editors:
- Luis Chiruzzo, Alan Ritter, Lu Wang
- Venue:
- NAACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 10887–10911
- URL:
- https://preview.aclanthology.org/Author-page-Marten-During-lu/2025.naacl-long.544/
- Cite (ACL):
- Badr AlKhamissi, Greta Tuckute, Antoine Bosselut, and Martin Schrimpf. 2025. The LLM Language Network: A Neuroscientific Approach for Identifying Causally Task-Relevant Units. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 10887–10911, Albuquerque, New Mexico. Association for Computational Linguistics.
- Cite (Informal):
- The LLM Language Network: A Neuroscientific Approach for Identifying Causally Task-Relevant Units (AlKhamissi et al., NAACL 2025)
- PDF:
- https://preview.aclanthology.org/Author-page-Marten-During-lu/2025.naacl-long.544.pdf