Artem Cherepanov


2025

Steering LLM Reasoning Through Bias-Only Adaptation
Viacheslav Sinii | Alexey Gorbatovski | Artem Cherepanov | Boris Shaposhnikov | Nikita Balagansky | Daniil Gavrilov
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

We show that training a single d-dimensional steering vector per layer with reinforcement learning, while freezing all base weights, matches the accuracy of fully RL-tuned reasoning models on mathematical-reasoning tasks. On an 8-billion-parameter model this adds only ≈ 0.0016% additional parameters and reproduces performance across a range of base models and mathematical-reasoning benchmarks. These results tighten the upper bound on the parameter budget required for high-level chain-of-thought reasoning, indicating that millions of adapter weights are unnecessary. The minimal trainable footprint reduces optimizer memory and inter-GPU communication, lowering the overall cost of fine-tuning. Moreover, a logit-lens analysis shows that the learned vectors amplify coherent token directions, providing clearer insight into the model’s internal computations.
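
A minimal sketch of what bias-only adaptation could look like in PyTorch, assuming a Hugging Face-style decoder model; the model name, hook placement, and optimizer settings are illustrative assumptions rather than the authors' implementation, and the RL training loop itself is omitted:

```python
# Illustrative sketch of bias-only adaptation: one trainable d-dimensional
# steering vector added to each layer's output, with all base weights frozen.
# Not the authors' code; the base model and hyperparameters are assumptions.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")  # assumed 8B base

# Freeze every base weight so only the steering vectors receive gradients.
for p in model.parameters():
    p.requires_grad_(False)

d = model.config.hidden_size
num_layers = model.config.num_hidden_layers

# One trainable d-dimensional steering vector per layer, initialized to zero.
steering = torch.nn.ParameterList(
    [torch.nn.Parameter(torch.zeros(d)) for _ in range(num_layers)]
)

def make_hook(vec):
    # Add the steering vector to the hidden states the layer emits.
    def hook(module, inputs, output):
        if isinstance(output, tuple):  # decoder layers return tuples in HF models
            return (output[0] + vec,) + output[1:]
        return output + vec
    return hook

for layer, vec in zip(model.model.layers, steering):
    layer.register_forward_hook(make_hook(vec))

# An RL objective on reasoning rewards would optimize only these
# num_layers * d parameters (~0.0016% of an 8B model).
optimizer = torch.optim.AdamW(steering.parameters(), lr=1e-3)
```

Because only the per-layer vectors (roughly num_layers × d values in total) are trainable, optimizer state and gradient communication scale with this tiny footprint rather than with the full model, which is the source of the fine-tuning cost reduction described above.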