Where Do LLMs Compose Meaning? A Layerwise Analysis of Compositional Robustness

Nura Aljaafari, Danilo Carvalho, Andre Freitas


Abstract
Understanding how large language models (LLMs) process compositional linguistic structures is integral to enhancing their reliability and interpretability. We present Constituent-Aware Pooling (CAP), a methodology grounded in compositionality, mechanistic interpretability, and information theory that intervenes in model activations by pooling token representations into linguistic constituents at various layers. Experiments across eight models (124M–8B parameters) on inverse definition modelling, hypernym prediction, and synonym prediction reveal that semantic composition is not localised to specific layers but distributed across network depth. Performance degrades substantially under constituent-based pooling, particularly in early and middle layers, with larger models showing greater sensitivity. We propose an information-theoretic interpretation: transformers’ training objectives incentivise deferred integration to maximise token-level throughput, resulting in fragmented rather than localised composition. These findings highlight fundamental architectural and training constraints requiring specialised approaches to encourage robust compositional processing.
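The core intervention the abstract describes can be sketched as follows. This is a minimal illustration of the pooling idea, not the paper's implementation: the function name, the choice of mean pooling, and the half-open span format are assumptions made here for clarity.

```python
import numpy as np

def constituent_aware_pool(hidden, spans):
    """Pool token activations within each constituent span.

    hidden: (seq_len, d) array of one layer's token representations.
    spans:  list of (start, end) half-open token-index ranges, one per
            linguistic constituent; tokens outside every span are left
            unchanged.
    Returns a copy of `hidden` in which every token inside a span is
    replaced by that span's mean vector, i.e. the layer is forced to
    carry a single composed representation per constituent (mean
    pooling is an assumption of this sketch).
    """
    pooled = hidden.copy()
    for start, end in spans:
        pooled[start:end] = hidden[start:end].mean(axis=0)
    return pooled

# Toy example: 5 tokens with 4-dim states; tokens 1-3 form one
# hypothetical constituent (e.g. "the red ball").
h = np.arange(20, dtype=float).reshape(5, 4)
out = constituent_aware_pool(h, [(1, 4)])
```

Applying such a pooling hook at different layers and measuring the resulting task degradation is what lets the analysis localise (or, per the paper's finding, fail to localise) where composition happens.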
Anthology ID:
2026.eacl-long.214
Volume:
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Marquez
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
4622–4646
URL:
https://preview.aclanthology.org/ingest-eacl/2026.eacl-long.214/
Cite (ACL):
Nura Aljaafari, Danilo Carvalho, and Andre Freitas. 2026. Where Do LLMs Compose Meaning? A Layerwise Analysis of Compositional Robustness. In Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4622–4646, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Where Do LLMs Compose Meaning? A Layerwise Analysis of Compositional Robustness (Aljaafari et al., EACL 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.eacl-long.214.pdf