How to Fine-Tune Safely on a Budget: Model Adaptation Using Minimal Resources
Anh C. Pham | Mihir Thalanki | Michael Sun | Aditya Chaloo | Ankita Gupta | Tian Xia | Aditya Mate | Ehi Nosakhare | Soundararajan Srinivasan
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track
Supervised fine-tuning (SFT) on benign data can paradoxically erode a language model’s safety alignment, a phenomenon known as catastrophic forgetting of safety behaviors. Although prior work shows that randomly adding safety examples can reduce harmful output, the principles that make certain examples more effective than others remain poorly understood. This paper investigates the hypothesis that the effectiveness of a safety example is governed by two key factors: its instruction-response behavior (e.g., refusal vs. explanation) and its semantic diversity across harm categories. We systematically evaluate sampling strategies based on these axes and find that structured, diversity-aware sampling significantly improves model safety. Our method reduces harmfulness by up to 41% while adding only 0.05% more data to the fine-tuning set.
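To make the "diversity-aware sampling" idea concrete, the sketch below shows one simple way to spread a tiny safety budget evenly across harm categories before mixing it into the SFT data. This is a hypothetical illustration, not the paper's actual method: the `sample_safety_examples` helper and the `harm_category` field are assumed names, and the paper's second axis (instruction-response behavior, e.g., refusal vs. explanation) could be handled by stratifying over (category, behavior) pairs instead.

```python
import random
from collections import defaultdict

def sample_safety_examples(pool, budget, seed=0):
    """Stratified sampling sketch (hypothetical helper, not the authors' code).

    `pool` is a list of dicts with an assumed 'harm_category' key;
    `budget` is the number of safety examples to mix into the benign
    SFT set (the paper reports adding as little as 0.05% extra data).
    """
    rng = random.Random(seed)
    by_category = defaultdict(list)
    for ex in pool:
        by_category[ex["harm_category"]].append(ex)

    categories = sorted(by_category)
    sampled = []
    i = 0
    # Round-robin over harm categories until the budget is spent,
    # so no single category dominates the safety mix.
    while len(sampled) < budget and any(by_category.values()):
        cat = categories[i % len(categories)]
        if by_category[cat]:
            bucket = by_category[cat]
            sampled.append(bucket.pop(rng.randrange(len(bucket))))
        i += 1
    return sampled

# Usage sketch: append the sampled safety examples to the benign SFT data.
# sft_data = benign_data + sample_safety_examples(safety_pool, budget=32)
```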