Zirui He
2025
SAE-SSV: Supervised Steering in Sparse Representation Spaces for Reliable Control of Language Models
Zirui He | Mingyu Jin | Bo Shen | Ali Payani | Yongfeng Zhang | Mengnan Du
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but controlling their behavior reliably remains challenging, especially in open-ended generation settings. This paper introduces a novel supervised steering approach that operates in sparse, interpretable representation spaces. We employ sparse autoencoders (SAEs) to obtain sparse latent representations that aim to disentangle semantic attributes from model activations. We then train linear classifiers to identify a small subspace of task-relevant dimensions in the latent representations. Finally, we learn supervised steering vectors constrained to this subspace, optimized to align with target behaviors. Experiments on sentiment, truthfulness, and political-polarity steering tasks with multiple LLMs demonstrate that our supervised steering vectors achieve higher success rates with minimal degradation in generation quality compared to existing methods. Further analysis reveals that a notably small subspace is sufficient for effective steering, enabling more targeted and interpretable interventions.
2024
Mitigating Shortcuts in Language Models with Soft Label Encoding
Zirui He | Huiqi Deng | Haiyan Zhao | Ninghao Liu | Mengnan Du
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Recent research has shown that large language models rely on spurious correlations in the data for natural language understanding (NLU) tasks. In this work, we aim to answer the following research question: Can we reduce spurious correlations by modifying the ground-truth labels of the training data? Specifically, we propose a simple yet effective debiasing framework, named Soft Label Encoding (SoftLE). First, we train a teacher model to quantify the degree to which each sample relies on shortcuts. Then, we encode this shortcut degree into a dummy class and use it to smooth the original ground-truth labels, generating soft labels. These soft labels are used to train a more robust student model that reduces spurious correlations between shortcut features and certain classes. Extensive experiments on two NLU benchmark tasks with two language models demonstrate that SoftLE significantly improves out-of-distribution generalization while maintaining satisfactory in-distribution accuracy. Our code is available at https://github.com/ZiruiHE99/sle