Zhiqi Bu
2025
Unlearning as multi-task optimization: A normalized gradient difference approach with an adaptive learning rate
Xiaomeng Jin | Zhiqi Bu | Bhanukiran Vinzamuri | Anil Ramakrishna | Kai-Wei Chang | Volkan Cevher | Mingyi Hong
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Machine unlearning has been used to remove unwanted knowledge acquired by large language models (LLMs). In this paper, we examine machine unlearning from an optimization perspective, framing it as a regularized multi-task optimization problem in which one task optimizes a forgetting objective and another optimizes model performance. In particular, we introduce a normalized gradient difference algorithm, which gives us better control over the trade-off between the objectives, and we integrate a new, automatic learning rate scheduler. We provide a theoretical analysis and empirically demonstrate the superior performance of the proposed method among state-of-the-art unlearning methods on the TOFU and MUSE datasets, while exhibiting stable training.
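The core idea in the abstract can be illustrated on a toy problem. Below is a minimal sketch of the normalized-gradient-difference direction: each task gradient is normalized to unit length before taking their difference, so neither the forgetting nor the retaining objective dominates simply because its raw gradient is larger. The function name, toy quadratic losses, and fixed step size are illustrative assumptions, not the paper's implementation (which additionally uses an automatic learning rate scheduler).

```python
import numpy as np

def normalized_gradient_difference(grad_retain, grad_forget, eps=1e-12):
    """Illustrative update direction: descend on the retain loss while
    ascending on the forget loss, with both gradients normalized so the
    two tasks contribute at comparable scales."""
    g_r = grad_retain / (np.linalg.norm(grad_retain) + eps)
    g_f = grad_forget / (np.linalg.norm(grad_forget) + eps)
    return g_r - g_f  # keep utility, remove targeted knowledge

# Toy setup: retain loss ||theta||^2 (minimized at 0),
# forget loss ||theta - 3||^2 (we want to move AWAY from its minimum).
theta = np.array([2.0, 2.0])
for _ in range(100):
    grad_retain = 2 * theta          # gradient of ||theta||^2
    grad_forget = 2 * (theta - 3.0)  # gradient of ||theta - 3||^2
    theta = theta - 0.05 * normalized_gradient_difference(grad_retain, grad_forget)
```

After the loop, theta sits near the retain optimum (the origin) while the forget loss has grown relative to the starting point, which is the intended trade-off: once the two normalized gradients point the same way, their difference vanishes and the update naturally stalls.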
SemEval-2025 Task 4: Unlearning sensitive content from Large Language Models
Anil Ramakrishna | Yixin Wan | Xiaomeng Jin | Kai-Wei Chang | Zhiqi Bu | Bhanukiran Vinzamuri | Volkan Cevher | Mingyi Hong | Rahul Gupta
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
We introduce SemEval-2025 Task 4: unlearning sensitive content from Large Language Models (LLMs). The task features 3 subtasks for LLM unlearning spanning different use cases: (1) unlearn long form synthetic creative documents spanning different genres; (2) unlearn short form synthetic biographies containing personally identifiable information (PII), including fake names, phone numbers, SSNs, email and home addresses; and (3) unlearn real documents sampled from the target model's training dataset. We received over 100 submissions from over 30 institutions, and we summarize the key techniques and lessons in this paper.