Analyzing the Inner Workings of Transformers in Compositional Generalization

Ryoma Kumon, Hitomi Yanaka


Abstract
The compositional generalization abilities of neural models have been sought after for human-like linguistic competence. The popular method to evaluate such abilities is to assess the models’ input-output behavior. However, that does not reveal the internal mechanisms, and the underlying competence of such models in compositional generalization remains unclear. To address this problem, we explore the inner workings of a Transformer model by finding an existing subnetwork that contributes to the generalization performance and by performing causal analyses on how the model utilizes syntactic features. We find that the model depends on syntactic features to output the correct answer, but that the subnetwork with much better generalization performance than the whole model relies on a non-compositional algorithm in addition to the syntactic features. We also show that the subnetwork improves its generalization performance relatively slowly during training compared to the in-distribution one, and that the non-compositional solution is acquired in the early stages of training.
Anthology ID:
2025.naacl-long.432
Volume:
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
8529–8540
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.naacl-long.432/
Cite (ACL):
Ryoma Kumon and Hitomi Yanaka. 2025. Analyzing the Inner Workings of Transformers in Compositional Generalization. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 8529–8540, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Analyzing the Inner Workings of Transformers in Compositional Generalization (Kumon & Yanaka, NAACL 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.naacl-long.432.pdf