Analyzing (In)Abilities of SAEs via Formal Languages

Abhinav Menon, Manish Shrivastava, David Krueger, Ekdeep Singh Lubana
Abstract
Autoencoders have been used to find interpretable and disentangled features underlying neural network representations in both the image and text domains. While the efficacy and pitfalls of such methods are well-studied in vision, corresponding results, both qualitative and quantitative, are lacking for the text domain. We aim to address this gap by training sparse autoencoders (SAEs) on a synthetic testbed of formal languages. Specifically, we train SAEs on the hidden representations of models trained on formal languages (Dyck-2, Expr, and English PCFG) under a wide variety of hyperparameter settings, finding that interpretable latents often emerge among the features learned by our SAEs. However, as in vision, performance turns out to be highly sensitive to the inductive biases of the training pipeline. Moreover, we show that latents correlating with certain features of the input do not always have a causal impact on the model's computation. We thus argue that causality must become a central target in SAE training: learning of causal features should be incentivized from the ground up. Motivated by this, we propose and perform preliminary investigations of an approach that promotes learning of causally relevant features in our formal language setting.
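
For concreteness, the sketch below illustrates the kind of setup the abstract describes: an autoencoder trained to reconstruct a model's hidden representations under a sparsity constraint. It is a minimal sketch assuming the common ReLU-encoder/linear-decoder SAE with an L1 sparsity penalty; the dimensions, learning rate, and sparsity coefficient are illustrative assumptions, not details taken from the paper.

    import torch
    import torch.nn as nn

    class SparseAutoencoder(nn.Module):
        """Minimal SAE: overcomplete ReLU encoder with a linear decoder."""
        def __init__(self, d_model: int, d_hidden: int):
            super().__init__()
            self.encoder = nn.Linear(d_model, d_hidden)
            self.decoder = nn.Linear(d_hidden, d_model)

        def forward(self, h: torch.Tensor):
            z = torch.relu(self.encoder(h))  # sparse latent code
            h_hat = self.decoder(z)          # reconstruction of the input
            return h_hat, z

    # Illustrative training step: MSE reconstruction loss plus an L1
    # penalty on the latents. All sizes and coefficients here are
    # assumptions for demonstration, not values from the paper.
    sae = SparseAutoencoder(d_model=512, d_hidden=4096)
    opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
    l1_coeff = 1e-3

    h = torch.randn(64, 512)  # stand-in for a batch of hidden representations
    opt.zero_grad()
    h_hat, z = sae(h)
    loss = ((h_hat - h) ** 2).mean() + l1_coeff * z.abs().mean()
    loss.backward()
    opt.step()

The L1 term drives most latent coordinates to zero, so each input is explained by a small set of active latents; the paper's point is that such latents may correlate with input features without causally mediating the model's computation.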
Anthology ID:
2025.naacl-long.249
Volume:
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
4837–4862
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.naacl-long.249/
Cite (ACL):
Abhinav Menon, Manish Shrivastava, David Krueger, and Ekdeep Singh Lubana. 2025. Analyzing (In)Abilities of SAEs via Formal Languages. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 4837–4862, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Analyzing (In)Abilities of SAEs via Formal Languages (Menon et al., NAACL 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.naacl-long.249.pdf