@inproceedings{yoon-etal-2025-r,
    title = "{R}-{TOFU}: Unlearning in Large Reasoning Models",
    author = "Yoon, Sangyeon  and
      Jeung, Wonje  and
      No, Albert",
    editor = "Christodoulopoulos, Christos  and
      Chakraborty, Tanmoy  and
      Rose, Carolyn  and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.265/",
    pages = "5239--5258",
    ISBN = "979-8-89176-332-6",
    abstract = "Large Reasoning Models (LRMs) embed private or copyrighted information not only in their final answers but also throughout multi-step chain-of-thought (CoT) traces, making reliable unlearning far more demanding than in standard LLMs. We introduce Reasoning-TOFU (R-TOFU), the first benchmark tailored to this setting. R-TOFU augments existing unlearning tasks with realistic CoT annotations and provides step-wise metrics that expose residual knowledge invisible to answer-level checks. Using R-TOFU, we carry out a comprehensive comparison of gradient-based and preference-optimization baselines and show that conventional answer-only objectives leave substantial forget traces in reasoning. We further propose Reasoned IDK, a preference-optimization variant that preserves coherent yet inconclusive reasoning, achieving a stronger balance between forgetting efficacy and model utility than earlier refusal styles. Finally, we identify a failure mode: decoding variants such as ZeroThink and LessThink can still reveal forgotten content despite seemingly successful unlearning, emphasizing the need to evaluate models under diverse decoding settings. Together, the benchmark, analysis, and new baseline establish a systematic foundation for studying and improving unlearning in LRMs while preserving their reasoning capabilities."
}