Avoiding Copyright Infringement via Large Language Model Unlearning

Guangyao Dou, Zheyuan Liu, Qing Lyu, Kaize Ding, Eric Wong


Abstract
Pre-trained Large Language Models (LLMs) have demonstrated remarkable capabilities but also pose risks by learning and generating copyrighted material, leading to significant legal and ethical concerns. In real-world scenarios, model owners must continuously address copyright infringement as new requests for content removal arrive at different points in time. This motivates sequential unlearning, in which copyrighted content is removed incrementally as each new request arises. Despite its practical relevance, sequential unlearning in the context of copyright infringement has not been rigorously explored in the existing literature. To address this gap, we propose Stable Sequential Unlearning (SSU), a novel framework for unlearning copyrighted content from LLMs over multiple time steps. Our approach identifies and removes the specific weight updates in the model's parameters that correspond to the copyrighted content. We improve unlearning efficacy by introducing a random labeling loss, and we help the model retain its general-purpose knowledge by adjusting targeted parameters. Experimental results show that SSU achieves an effective trade-off between unlearning efficacy and general-purpose language ability, outperforming existing baselines.
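The abstract names two ingredients: removing the weight updates attributable to the copyrighted content, and a random labeling loss on the forget set. The snippet below is a minimal toy sketch of these ideas, not the authors' released implementation: the `ToyLM` model, the hyperparameters, and the way the two steps are combined are illustrative assumptions, and the paper's targeted parameter adjustment and handling of multiple sequential requests are omitted.

```python
# Toy sketch (assumed, not SSU's actual code): unlearn one piece of text by
# (1) fine-tuning a copy of the model on it to isolate its weight update,
# (2) subtracting that update from the base weights, and
# (3) a short random-labeling pass to further degrade memorization.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
VOCAB = 100  # toy vocabulary size

class ToyLM(nn.Module):
    """Tiny stand-in for a causal LM: embed tokens, predict the next token."""
    def __init__(self, vocab=VOCAB, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.head = nn.Linear(dim, vocab)

    def forward(self, tokens):  # (batch, seq) -> (batch, seq, vocab)
        return self.head(self.emb(tokens))

def next_token_loss(model, tokens, labels=None):
    """Cross-entropy against the next token, or against supplied labels
    (the latter is used for the random-labeling pass)."""
    logits = model(tokens[:, :-1])
    if labels is None:
        labels = tokens[:, 1:]
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), labels.reshape(-1))

def fine_tune(model, data, steps=50, lr=1e-2, random_labels=False):
    """Fine-tune a copy of `model` on `data`; with random_labels=True the
    targets are resampled uniformly, pushing predictions on the text toward noise."""
    tuned = copy.deepcopy(model)
    opt = torch.optim.SGD(tuned.parameters(), lr=lr)
    for _ in range(steps):
        labels = torch.randint(0, VOCAB, data[:, 1:].shape) if random_labels else None
        loss = next_token_loss(tuned, data, labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return tuned

def remove_weight_update(base, tuned, alpha=1.0):
    """Subtract the weight update attributed to the copyrighted content:
    theta_unlearned = theta_base - alpha * (theta_tuned - theta_base)."""
    unlearned = copy.deepcopy(base)
    with torch.no_grad():
        for p_u, p_b, p_t in zip(unlearned.parameters(),
                                 base.parameters(), tuned.parameters()):
            p_u.copy_(p_b - alpha * (p_t - p_b))
    return unlearned

# Toy usage: a single unlearning request at one time step.
base = ToyLM()
copyrighted = torch.randint(0, VOCAB, (8, 16))  # stand-in for a protected text

tuned = fine_tune(base, copyrighted)               # isolate the content's weight update
unlearned = remove_weight_update(base, tuned)      # remove that update from the model
unlearned = fine_tune(unlearned, copyrighted,      # random-labeling pass to aid forgetting
                      steps=20, lr=1e-3, random_labels=True)
print("loss on forgotten text:", next_token_loss(unlearned, copyrighted).item())
```

In the sequential setting described in the abstract, a loop of this kind would be repeated as each new removal request arrives, which is where the paper's stability mechanisms (targeted parameter adjustment) matter; they are not reproduced in this sketch.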
Anthology ID:
2025.findings-naacl.288
Volume:
Findings of the Association for Computational Linguistics: NAACL 2025
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
5176–5200
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.288/
Cite (ACL):
Guangyao Dou, Zheyuan Liu, Qing Lyu, Kaize Ding, and Eric Wong. 2025. Avoiding Copyright Infringement via Large Language Model Unlearning. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 5176–5200, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Avoiding Copyright Infringement via Large Language Model Unlearning (Dou et al., Findings 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.288.pdf