Feature Drift: How Fine-Tuning Repurposes Representations in LLMs

Andrey V. Galichin, Anton Korznikov, Alexey Dontsov, Oleg Rogov, Elena Tutubalina, Ivan Oseledets


Abstract
Fine-tuning instills many important behaviors in LLMs, such as instruction following and safety alignment, which makes it crucial to study how fine-tuning changes models’ internal mechanisms. Sparse Autoencoders (SAEs) offer a powerful tool for interpreting neural networks by extracting the concepts (features) represented in their activations. Previous work observed that SAEs trained on base models transfer effectively to instruction-tuned (chat) models and attributed this to activation similarity. In this work, we propose feature drift as an alternative explanation: the feature space remains relevant, but the distribution of feature activations changes. In other words, fine-tuning recombines existing concepts rather than learning new ones. We validate this by showing that base SAEs reconstruct both base and chat activations comparably despite systematic differences, with individual features exhibiting clear drift patterns. In a case study of refusal behavior, we identify base SAE features that drift to activate on harmful instructions in chat models; causal interventions confirm that these features mediate refusal. Our findings suggest that monitoring how existing features drift, rather than searching for entirely new features, may provide a more complete explanation of how fine-tuning changes model capabilities.
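The measurement the abstract describes can be sketched concretely. The following is a minimal, hypothetical illustration, not the authors' code: a standard ReLU SAE (architecture, tensor shapes, and the log-ratio drift score are all assumptions) is applied to activations collected from both the base and the chat model at the same layer, comparing (a) reconstruction error, which tests whether the base feature space remains relevant, and (b) per-feature firing rates, which surface drifted features.

```python
import torch

class SparseAutoencoder(torch.nn.Module):
    """A standard ReLU SAE; an assumed architecture, not the paper's exact one."""
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = torch.nn.Linear(d_model, d_features)
        self.decoder = torch.nn.Linear(d_features, d_model)

    def forward(self, x):
        f = torch.relu(self.encoder(x))  # sparse feature activations
        return self.decoder(f), f

def drift_report(sae, base_acts, chat_acts, eps=1e-8):
    """Compare reconstruction error and feature firing rates across models.

    base_acts, chat_acts: (n_tokens, d_model) residual-stream activations
    from the base and fine-tuned (chat) model at a matched layer.
    """
    with torch.no_grad():
        recon_b, feats_b = sae(base_acts)
        recon_c, feats_c = sae(chat_acts)
        # (a) Comparable reconstruction on both models supports the claim
        # that the base feature space remains relevant after fine-tuning.
        mse_b = (recon_b - base_acts).pow(2).mean().item()
        mse_c = (recon_c - chat_acts).pow(2).mean().item()
        # (b) Per-feature firing rates; a large log-ratio flags features
        # whose activation distribution has drifted under fine-tuning.
        freq_b = (feats_b > 0).float().mean(dim=0)
        freq_c = (feats_c > 0).float().mean(dim=0)
        drift = torch.log((freq_c + eps) / (freq_b + eps))
    return mse_b, mse_c, drift

# Example with random stand-in activations (real use would collect these
# by running matched prompts through both models):
sae = SparseAutoencoder(d_model=512, d_features=4096)
base_acts = torch.randn(1024, 512)
chat_acts = torch.randn(1024, 512)
mse_b, mse_c, drift = drift_report(sae, base_acts, chat_acts)
print(f"base MSE={mse_b:.3f}  chat MSE={mse_c:.3f}")
print("most drifted features:", drift.abs().topk(5).indices.tolist())
```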
Anthology ID:
2026.findings-eacl.96
Volume:
Findings of the Association for Computational Linguistics: EACL 2026
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Màrquez
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1878–1887
URL:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.96/
Cite (ACL):
Andrey V. Galichin, Anton Korznikov, Alexey Dontsov, Oleg Rogov, Elena Tutubalina, and Ivan Oseledets. 2026. Feature Drift: How Fine-Tuning Repurposes Representations in LLMs. In Findings of the Association for Computational Linguistics: EACL 2026, pages 1878–1887, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Feature Drift: How Fine-Tuning Repurposes Representations in LLMs (Galichin et al., Findings 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.96.pdf
Checklist:
 2026.findings-eacl.96.checklist.pdf