Model Surgery: Modulating LLM’s Behavior Via Simple Parameter Editing
Huanqian Wang | Yang Yue | Rui Lu | Jingxin Shi | Andrew Zhao | Shenzhi Wang | Shiji Song | Gao Huang
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Large Language Models (LLMs) have demonstrated great potential as generalist assistants, showcasing powerful task understanding and problem-solving capabilities. To deploy LLMs as AI assistants, it is crucial that these models exhibit desirable behavioral traits, such as non-toxicity and resilience against jailbreak attempts. Current approaches for detoxification or preventing jailbreaking usually involve Supervised Fine-Tuning (SFT) or Reinforcement Learning from Human Feedback (RLHF), which require fine-tuning billions of parameters through gradient descent at substantial computational cost. Furthermore, models modified through SFT and RLHF may deviate from the pretrained models, potentially leading to a degradation in foundational LLM capabilities. In this paper, we observe that, surprisingly, directly editing a small subset of parameters can effectively modulate specific behaviors of LLMs, such as detoxification and resistance to jailbreaking, with only inference-level computational resources. Experiments demonstrate that on the detoxification task, our approach achieves reductions of up to 90.0% in toxicity on the RealToxicityPrompts dataset and 49.2% on ToxiGen, while maintaining the LLM's general capabilities in areas such as common sense, question answering, and mathematics.
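
To make the core idea concrete, below is a minimal PyTorch sketch of direct parameter editing. This is an illustration under stated assumptions, not the paper's actual procedure: it assumes a precomputed "behavior direction" v in hidden space (for example, obtained from a linear toxicity probe over hidden states) and removes that direction's component from the k weight rows most aligned with it. The function name edit_weight, the top-k selection rule, and the scale alpha are all hypothetical.

# Hypothetical sketch of direct parameter editing (NOT the paper's exact
# procedure). Given a behavior direction v in hidden space, shift only the
# k weight rows most aligned with v, leaving the rest of the model untouched.
import torch

def edit_weight(W: torch.Tensor, v: torch.Tensor, k: int = 32,
                alpha: float = 1.0) -> torch.Tensor:
    """Edit the k rows of W most aligned with direction v.

    W: (out_dim, in_dim) weight matrix of one layer.
    v: (in_dim,) vector encoding the unwanted behavior (assumed given).
    Returns a copy of W with the selected rows' component along v removed.
    """
    v = v / v.norm()                      # normalize the behavior direction
    scores = W @ v                        # alignment of each row with v
    idx = scores.abs().topk(k).indices    # the k most behavior-aligned rows
    W_edit = W.clone()
    # subtract the (scaled) projection onto v from the selected rows only
    W_edit[idx] -= alpha * torch.outer(scores[idx], v)
    return W_edit

# toy usage: edit 32 of 4096 rows of a random weight matrix
W = torch.randn(4096, 4096)
v = torch.randn(4096)
W_new = edit_weight(W, v)
print((W_new != W).any(dim=1).sum().item(), "rows edited")  # -> 32

Because the edit is a single algebraic update to a small slice of one matrix, it runs with inference-level memory and compute and involves no gradient descent over billions of parameters, which is the efficiency property the abstract emphasizes.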