The Unintended Trade-off of AI Alignment: Balancing Hallucination Mitigation and Safety in LLMs

Omar Mahmoud, Ali Khalil, Thommen George Karimpanal, Buddhika Laknath Semage, Santu Rana
Abstract
Hallucination in large language models (LLMs) has been widely studied in recent years, with progress in both detection and mitigation aimed at improving truthfulness. Yet a critical side effect remains largely overlooked: enhancing truthfulness can negatively impact safety alignment. In this paper, we investigate this trade-off and show that increasing factual accuracy often comes at the cost of weakened refusal behavior. Our analysis reveals that this arises from overlapping components in the model that simultaneously encode hallucination and refusal information, leading alignment methods to suppress factual knowledge unintentionally. We further examine how fine-tuning on benign datasets, even when curated for safety, can degrade alignment for the same reason. To address this, we propose a method that disentangles refusal-related features from hallucination features using sparse autoencoders, and preserves refusal behavior during fine-tuning through subspace orthogonalization. This approach prevents hallucinations from increasing while maintaining safety alignment. We evaluate our method on commonsense reasoning tasks and harmfulness benchmarks (AdvBench and StrongReject). Results demonstrate that our approach preserves refusal behavior and task utility, mitigating the trade-off between truthfulness and safety.
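The abstract names two ingredients: sparse autoencoders (SAEs) to separate refusal-related features from hallucination features, and subspace orthogonalization to keep fine-tuning updates from erasing refusal behavior. The sketch below illustrates the general shape of such an approach in PyTorch; all tensor sizes, feature indices, and the parameter accessor are illustrative assumptions, not the authors' released implementation.

```python
import torch

# Hedged sketch of the two ingredients named in the abstract:
#   (1) a sparse autoencoder (SAE) whose decoder rows act as feature
#       directions in the model's residual stream, and
#   (2) orthogonalization of fine-tuning updates against the refusal
#       subspace, so updates cannot write into (and erase) it.
# All shapes, indices, and names are assumptions for illustration.

d_model, n_features = 512, 2048          # toy sizes for the sketch

# Assumed pre-trained SAE decoder: h_hat = z @ W_dec, with z sparse.
W_dec = torch.randn(n_features, d_model)

# Suppose these SAE features were flagged as refusal-related (e.g. by
# contrasting activations on refused vs. answered harmful prompts).
refusal_idx = torch.tensor([12, 873, 2041])   # hypothetical indices

# Orthonormal basis Q of the refusal subspace (QR on feature directions).
Q, _ = torch.linalg.qr(W_dec[refusal_idx].T)  # Q: (d_model, k)

def orthogonalize_update(delta_w: torch.Tensor) -> torch.Tensor:
    """Remove the component of a weight update that lies in the refusal
    subspace; rows of delta_w are residual-stream directions."""
    return delta_w - (delta_w @ Q) @ Q.T

# Usage inside a fine-tuning step (hypothetical parameter accessor):
# for p in model.residual_write_params():
#     p.grad = orthogonalize_update(p.grad)
```

Projecting each update off the refusal subspace lets fine-tuning move the model freely in the orthogonal complement (where, per the paper's analysis, hallucination-related features can still be adjusted) while leaving the refusal directions untouched.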
Anthology ID:
2026.findings-eacl.53
Volume:
Findings of the Association for Computational Linguistics: EACL 2026
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Màrquez
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1017–1037
URL:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.53/
Cite (ACL):
Omar Mahmoud, Ali Khalil, Thommen George Karimpanal, Buddhika Laknath Semage, and Santu Rana. 2026. The Unintended Trade-off of AI Alignment: Balancing Hallucination Mitigation and Safety in LLMs. In Findings of the Association for Computational Linguistics: EACL 2026, pages 1017–1037, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
The Unintended Trade-off of AI Alignment: Balancing Hallucination Mitigation and Safety in LLMs (Mahmoud et al., Findings 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.53.pdf
Checklist:
2026.findings-eacl.53.checklist.pdf