Unlearning as multi-task optimization: A normalized gradient difference approach with an adaptive learning rate
Xiaomeng Jin | Zhiqi Bu | Bhanukiran Vinzamuri | Anil Ramakrishna | Kai-Wei Chang | Volkan Cevher | Mingyi Hong
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Machine unlearning has been used to remove unwanted knowledge acquired by large language models (LLMs). In this paper, we examine machine unlearning from an optimization perspective, framing it as a regularized multi-task optimization problem in which one task optimizes a forgetting objective and another preserves model performance. In particular, we introduce a normalized gradient difference algorithm that gives finer control over the trade-off between the two objectives, and we integrate it with a new, automatic learning rate scheduler. We provide a theoretical analysis and empirically demonstrate the superior performance of our method among state-of-the-art unlearning methods on the TOFU and MUSE datasets, while exhibiting stable training.
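To make the abstract's core idea concrete, here is a minimal PyTorch sketch of one normalized-gradient-difference update: two task gradients (forget and retain) are each normalized by their global L2 norm and combined so that neither objective dominates. This is an illustration written under stated assumptions, not the authors' implementation; the function name `ngdiff_step`, the `loss_fn(model, batch)` interface, and the fixed step size standing in for the paper's automatic learning rate scheduler are all hypothetical.

```python
import torch

def ngdiff_step(model, forget_batch, retain_batch, loss_fn, lr=1e-5, eps=1e-12):
    """One normalized-gradient-difference update (illustrative sketch).

    Treats unlearning as two tasks: a forgetting objective (to ascend)
    and a retaining objective (to descend), combining their *normalized*
    gradients so neither task dominates the step direction.
    """
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradient of the forget loss (we want this loss to increase).
    forget_loss = loss_fn(model, forget_batch)
    g_forget = torch.autograd.grad(forget_loss, params)

    # Gradient of the retain loss (we want this loss to decrease).
    retain_loss = loss_fn(model, retain_batch)
    g_retain = torch.autograd.grad(retain_loss, params)

    # Global L2 norm of each task gradient across all parameters.
    norm_f = torch.sqrt(sum(g.pow(2).sum() for g in g_forget)) + eps
    norm_r = torch.sqrt(sum(g.pow(2).sum() for g in g_retain)) + eps

    # Update: descend on the normalized retain gradient, ascend on the
    # normalized forget gradient. A fixed lr stands in for the paper's
    # adaptive learning rate scheduler.
    with torch.no_grad():
        for p, gf, gr in zip(params, g_forget, g_retain):
            p -= lr * (gr / norm_r - gf / norm_f)

    return forget_loss.item(), retain_loss.item()
```

Normalizing each task gradient to unit length keeps either objective from dominating when the raw gradient magnitudes differ by orders of magnitude, which is the trade-off control the abstract refers to.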