Fine-Tuning Lowers Safety and Disrupts Evaluation Consistency

Kathleen C. Fraser, Hillary Dawkins, Isar Nejadgholi, Svetlana Kiritchenko


Abstract
Fine-tuning a general-purpose large language model (LLM) for a specific domain or task has become a routine procedure for ordinary users. However, fine-tuning is known to remove the safety alignment features of the model, even when the fine-tuning data does not contain any harmful content. We consider this a critical failure mode of LLMs, given the widespread uptake of fine-tuning combined with the benign nature of the “attack”. Most well-intentioned developers are likely unaware that they are deploying an LLM with reduced safety. Conversely, this known vulnerability can be easily exploited by malicious actors intending to bypass safety guardrails. To make meaningful progress in mitigating this issue, we first need reliable and reproducible safety evaluations. In this work, we investigate how robust a safety benchmark is to trivial variations in the experimental procedure and to the stochastic nature of LLMs. Our initial experiments expose surprising variance in the results of the safety evaluation, even when seemingly inconsequential changes are made to the fine-tuning setup. Our observations have serious implications for how researchers in this field should report results to enable meaningful comparisons in the future.
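
The abstract's reproducibility concern can be made concrete with a minimal sketch: repeat the same benign fine-tune under several random seeds and compare the resulting safety scores. Everything below is hypothetical and is not the authors' pipeline: the model name "base-llm", the fine_tune and safety_refusal_rate helpers, and the simulated refusal rates merely stand in for a real fine-tuning and safety-evaluation setup.

# Illustrative sketch only: measuring run-to-run variance in a safety
# evaluation across repeated fine-tuning runs with different seeds.
import random
import statistics

def fine_tune(base_model: str, seed: int) -> str:
    """Placeholder: fine-tune `base_model` on benign data with `seed`;
    return an identifier for the resulting checkpoint."""
    return f"{base_model}-ft-seed{seed}"

def safety_refusal_rate(checkpoint: str, seed: int) -> float:
    """Placeholder: fraction of harmful prompts the checkpoint refuses.
    The numbers are simulated, not real experimental data."""
    random.seed(seed)
    return 0.55 + random.uniform(-0.15, 0.15)

seeds = [0, 1, 2, 3, 4]
rates = [safety_refusal_rate(fine_tune("base-llm", s), s) for s in seeds]

print(f"refusal rates: {[round(r, 3) for r in rates]}")
print(f"mean={statistics.mean(rates):.3f}  stdev={statistics.stdev(rates):.3f}")

Reporting a mean and standard deviation over repeated runs, rather than a single evaluation score, is one way to enable the meaningful comparisons the paper argues for.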
Anthology ID:
2025.llmsec-1.10
Volume:
Proceedings of the First Workshop on LLM Security (LLMSEC)
Month:
August
Year:
2025
Address:
Vienna, Austria
Editor:
Jekaterina Novikova
Venues:
LLMSEC | WS
SIG:
SIGSEC
Publisher:
Association for Computational Linguistics
Pages:
129–141
URL:
https://preview.aclanthology.org/corrections-2025-08/2025.llmsec-1.10/
Cite (ACL):
Kathleen C. Fraser, Hillary Dawkins, Isar Nejadgholi, and Svetlana Kiritchenko. 2025. Fine-Tuning Lowers Safety and Disrupts Evaluation Consistency. In Proceedings of the First Workshop on LLM Security (LLMSEC), pages 129–141, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Fine-Tuning Lowers Safety and Disrupts Evaluation Consistency (Fraser et al., LLMSEC 2025)
PDF:
https://preview.aclanthology.org/corrections-2025-08/2025.llmsec-1.10.pdf
Supplementary material:
2025.llmsec-1.10.SupplementaryMaterial.txt