Opt-Out: Investigating Entity-Level Unlearning for Large Language Models via Optimal Transport

Minseok Choi, Daniel Rim, Dohyun Lee, Jaegul Choo


Abstract
Instruction-following large language models (LLMs), such as ChatGPT, have become widely popular among everyday users. However, these models can inadvertently disclose private, sensitive information to their users, underscoring the need for machine unlearning techniques that remove selected information from the models. While prior work has focused on forgetting small, random subsets of training data at the instance level, we argue that real-world scenarios often require the removal of all of a user's data, which calls for more careful handling. In this study, we explore entity-level unlearning, which aims to erase all knowledge related to a target entity while preserving the model's remaining capabilities. To address this, we introduce Opt-Out, an optimal transport-based unlearning method that uses the Wasserstein distance from the model's initial parameters to achieve more effective and fine-grained unlearning. We also present the first Entity-Level Unlearning Dataset (ELUDe), designed to evaluate entity-level unlearning. Our empirical results demonstrate that Opt-Out surpasses existing methods, establishing a new standard for secure and adaptable LLMs that can accommodate user data removal requests without the need for full retraining.
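The abstract describes the method only at a high level, so the sketch below is a minimal illustration rather than the paper's actual objective: it combines a gradient-ascent term on the forget data and a standard retain term with a regularizer that approximates the Wasserstein distance between the current and initial parameters. The loss weighting `lam`, the Hugging Face-style model interface, and the per-layer 1-D 2-Wasserstein approximation are all assumptions made for this example.

```python
# A minimal, self-contained sketch (PyTorch-style); NOT the authors' exact
# formulation. Assumptions: a Hugging Face-style causal LM whose forward
# pass returns .loss, gradient ascent on the forget set, and a per-layer
# 1-D 2-Wasserstein approximation of the transport cost to the initial
# parameters, weighted by a hypothetical coefficient `lam`.
import copy

import torch


def wasserstein_1d_sq(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """Squared 1-D 2-Wasserstein distance between two equal-size empirical
    distributions: the mean squared difference of their sorted samples."""
    p_sorted, _ = torch.sort(p.flatten())
    q_sorted, _ = torch.sort(q.flatten())
    return torch.mean((p_sorted - q_sorted) ** 2)


def unlearning_step(model, init_model, forget_batch, retain_batch,
                    optimizer, lam: float = 0.1):
    """One update: ascend on the forget data, descend on the retain data,
    and penalize transport cost from the initial parameters."""
    forget_loss = model(**forget_batch).loss   # knowledge to erase
    retain_loss = model(**retain_batch).loss   # capabilities to keep
    ot_reg = sum(
        wasserstein_1d_sq(p, p0)
        for p, p0 in zip(model.parameters(), init_model.parameters())
    )
    loss = -forget_loss + retain_loss + lam * ot_reg
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Usage: freeze a copy of the initial model before unlearning begins.
# init_model = copy.deepcopy(model).eval()
# for p in init_model.parameters():
#     p.requires_grad_(False)
```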
Anthology ID: 2025.acl-long.1371
Volume: Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 28280–28297
URL: https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.1371/
Cite (ACL): Minseok Choi, Daniel Rim, Dohyun Lee, and Jaegul Choo. 2025. Opt-Out: Investigating Entity-Level Unlearning for Large Language Models via Optimal Transport. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 28280–28297, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): Opt-Out: Investigating Entity-Level Unlearning for Large Language Models via Optimal Transport (Choi et al., ACL 2025)
PDF: https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.1371.pdf