@inproceedings{parii-etal-2025-machine,
    title = "Machine Unlearning of Personally Identifiable Information in Large Language Models",
    author = "Parii, Dan  and
      van Osch, Thomas  and
      Sun, Chang",
    editor = "Aletras, Nikolaos  and
      Chalkidis, Ilias  and
      Barrett, Leslie  and
      Goanță, Cătălina  and
      Preoțiuc-Pietro, Daniel  and
      Spanakis, Gerasimos",
    booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2025",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.nllp-1.6/",
    pages = "54--67",
    ISBN = "979-8-89176-338-8",
    abstract = "Pretrained LLMs are trained on massive web-scale datasets, which often contain personally identifiable information (PII), raising serious legal and ethical concerns. A key research challenge is how to effectively unlearn PII without degrading the model{'}s utility or leaving implicit knowledge that can be exploited. This study proposes UnlearnPII, a benchmark designed to evaluate the effectiveness of PII unlearning methods, addressing limitations in existing metrics that overlook implicit knowledge and assess all tokens equally. Our benchmark focuses on detecting PII leakage, testing model robustness through obfuscated prompts and jailbreak attacks over different domains, while measuring utility and retention quality. To advance practical solutions, we propose a new PII unlearning method, $\text{PERMU}_{\text{tok}}$. By applying token-level noise, we achieve 1) simplified integration into existing workflows and 2) improved retention and output quality, while maintaining unlearning effectiveness. The code is open-source and publicly available."
}