SAEs Are Good for Steering – If You Select the Right Features

Dana Arad, Aaron Mueller, Yonatan Belinkov


Abstract
Sparse Autoencoders (SAEs) have been proposed as an unsupervised approach to learn a decomposition of a model’s latent space. This enables useful applications such as fine-grained steering of model outputs without requiring labeled data. Current steering methods identify SAE features to target by analyzing the input tokens that activate them. However, recent work has highlighted that activations alone do not fully describe the effect of a feature on the model’s output. In this work we draw a distinction between two types of features: input features, which mainly capture patterns in the model’s input, and output features, which have a human-understandable effect on the model’s output. We propose input and output scores to characterize and locate these types of features, and show that high values for both scores rarely co-occur in the same features. These findings have practical implications: after filtering out features with low output scores, steering with SAEs results in a 2–3x improvement, matching the performance of existing supervised methods.
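The procedure the abstract describes can be sketched in a few lines. This is a minimal illustrative example, not the paper's implementation: the shapes, the `output_score` placeholder, the threshold, and the steering coefficient `alpha` are all assumptions made for the sketch. Steering with an SAE typically means adding a feature's decoder direction to a hidden state; the paper's contribution is to first filter candidate features by an output score.

```python
import numpy as np

# Hypothetical sketch of SAE-based steering with output-score filtering.
# Dimensions, the scoring function, and alpha are illustrative assumptions.

rng = np.random.default_rng(0)
d_model, n_features = 16, 64

# SAE decoder: each row is one feature's direction in the model's latent space.
W_dec = rng.normal(size=(n_features, d_model))
W_dec /= np.linalg.norm(W_dec, axis=1, keepdims=True)

def output_score(feature_idx: int) -> float:
    # Placeholder: in the paper, this quantifies whether the feature has a
    # human-understandable effect on the model's *output*. Here we just
    # draw a random value so the sketch runs end to end.
    return float(rng.uniform())

# Keep only features whose output score clears a threshold;
# low-output-score ("input") features are filtered out before steering.
threshold = 0.5
kept = [i for i in range(n_features) if output_score(i) > threshold]

def steer(hidden: np.ndarray, feature_idx: int, alpha: float = 4.0) -> np.ndarray:
    """Steer a hidden state by adding a scaled SAE decoder direction."""
    return hidden + alpha * W_dec[feature_idx]

h = rng.normal(size=d_model)
h_steered = steer(h, kept[0])
```

In practice the hidden state would come from a chosen layer of the language model, and `output_score` would be estimated from the feature's causal effect on generated text rather than drawn at random.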
Anthology ID: 2025.emnlp-main.519
Volume: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2025
Address: Suzhou, China
Editors: Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 10252–10270
URL: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.519/
Cite (ACL):
Dana Arad, Aaron Mueller, and Yonatan Belinkov. 2025. SAEs Are Good for Steering – If You Select the Right Features. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 10252–10270, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): SAEs Are Good for Steering – If You Select the Right Features (Arad et al., EMNLP 2025)
PDF: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.519.pdf
Checklist: 2025.emnlp-main.519.checklist.pdf