Unlearning vs. Obfuscation: Are We Truly Removing Knowledge?

Guangzhi Sun, Potsawee Manakul, Xiao Zhan, Mark Gales


Abstract
Unlearning has emerged as a critical capability for large language models (LLMs) to support data privacy, regulatory compliance, and ethical AI deployment. Recent techniques often rely on obfuscation, injecting incorrect or irrelevant information to suppress knowledge. Such methods effectively constitute knowledge addition rather than true removal, often leaving models vulnerable to probing. In this paper, we formally distinguish unlearning from obfuscation and introduce a probing-based evaluation framework to assess whether existing approaches genuinely remove targeted information. Moreover, we propose DF-MCQ, a novel unlearning method that flattens the model's predictive distribution over automatically generated multiple-choice questions using KL-divergence, effectively removing knowledge about target individuals and triggering appropriate refusal behaviour. Experimental results demonstrate that DF-MCQ achieves unlearning with over 90% refusal rate and uncertainty on probing questions at the level of random choice, much higher than that obtained with obfuscation.
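The abstract only sketches the DF-MCQ objective. The snippet below is a minimal PyTorch sketch of the general idea of flattening a model's predictive distribution over multiple-choice options with a KL-divergence term; the function name, tensor shapes, and the direction of the KL term are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def flattening_loss(logits: torch.Tensor, option_token_ids: torch.Tensor) -> torch.Tensor:
    """Illustrative distribution-flattening loss over MCQ options.

    logits: [vocab_size] next-token logits at the answer position.
    option_token_ids: token ids of the candidate answers (e.g. "A", "B", "C", "D").
    """
    # Restrict the distribution to the answer options only.
    option_logits = logits[option_token_ids]
    log_probs = F.log_softmax(option_logits, dim=-1)

    # Uniform target: maximal uncertainty over the options.
    uniform = torch.full_like(log_probs, 1.0 / option_token_ids.numel())

    # KL(uniform || model); assumed direction, minimised when the model
    # assigns equal probability to every option.
    return F.kl_div(log_probs, uniform, reduction="sum")
```

In practice such a loss would be averaged over a batch of automatically generated multiple-choice questions about the unlearning target and combined with a retention objective on unrelated data; those details are not shown here.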
Anthology ID:
2025.emnlp-main.577
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
11468–11478
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.577/
Cite (ACL):
Guangzhi Sun, Potsawee Manakul, Xiao Zhan, and Mark Gales. 2025. Unlearning vs. Obfuscation: Are We Truly Removing Knowledge?. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 11468–11478, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Unlearning vs. Obfuscation: Are We Truly Removing Knowledge? (Sun et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.577.pdf
Checklist:
 2025.emnlp-main.577.checklist.pdf