Selective Differential Privacy for Language Modeling

Weiyan Shi, Aiqi Cui, Evan Li, Ruoxi Jia, Zhou Yu


Abstract
With the increasing adoption of language models, it has become crucial to protect these models from leaking private information. Previous work has attempted to tackle this challenge by training RNN-based language models with differential privacy guarantees. However, applying classical differential privacy to language models leads to poor model performance, as the underlying privacy notion is over-pessimistic and provides undifferentiated protection for all tokens in the data. Given that private information in natural language is sparse (for example, the bulk of an email might not carry personally identifiable information), we propose a new privacy notion, selective differential privacy, which provides rigorous privacy guarantees on only the sensitive portion of the data in order to improve model utility. To realize this new notion, we develop a corresponding privacy mechanism, Selective-DPSGD, for RNN-based language models. Beyond language modeling, we also apply the method to a more concrete application: dialog systems. Experiments on both language modeling and dialog system building show that the proposed privacy-preserving mechanism achieves better utility than the baselines while remaining safe under various privacy attacks. The data and code are released at https://github.com/wyshi/lm_privacy to facilitate future research.
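
The abstract describes the mechanism only at a high level. As a rough illustration of the underlying idea (gradient clipping and noise applied only to the contribution of tokens marked as sensitive, while the rest of the data is trained on normally), here is a minimal PyTorch sketch. It is not the paper's actual Selective-DPSGD algorithm; the toy model, the sensitive_mask, and the constants CLIP_NORM, NOISE_MULT, and LR are illustrative assumptions, and real DP-SGD clips per example rather than per batch.

import torch
import torch.nn as nn

torch.manual_seed(0)

VOCAB, EMB, HID = 100, 32, 64
CLIP_NORM, NOISE_MULT, LR = 1.0, 1.1, 0.1

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.LSTM(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.out(h)

model = TinyLM()
loss_fn = nn.CrossEntropyLoss()

# Toy batch: token ids plus a mask marking which target positions are sensitive.
tokens = torch.randint(0, VOCAB, (4, 12))
inputs, targets = tokens[:, :-1], tokens[:, 1:]
sensitive_mask = torch.zeros_like(targets, dtype=torch.bool)
sensitive_mask[:, 3:5] = True          # pretend positions 3-4 carry private info

logits = model(inputs)                 # (batch, seq, vocab)

def masked_grads(mask):
    """Gradient of the LM loss restricted to the masked target positions."""
    loss = loss_fn(logits[mask], targets[mask])
    return list(torch.autograd.grad(loss, model.parameters(), retain_graph=True))

public_g = masked_grads(~sensitive_mask)   # ordinary gradient, no noise
private_g = masked_grads(sensitive_mask)   # gets clipping and Gaussian noise

# DP-SGD-style treatment of the sensitive part: clip to CLIP_NORM, add noise.
# (Batch-level clipping is a simplification of per-example clipping.)
total_norm = torch.sqrt(sum(g.pow(2).sum() for g in private_g))
scale = (CLIP_NORM / (total_norm + 1e-6)).clamp(max=1.0)
private_g = [g * scale + NOISE_MULT * CLIP_NORM * torch.randn_like(g)
             for g in private_g]

# One parameter update combining the public and the privatized gradients.
with torch.no_grad():
    for p, g_pub, g_priv in zip(model.parameters(), public_g, private_g):
        p -= LR * (g_pub + g_priv)

In this sketch the non-sensitive tokens keep their clean gradient signal, which is what lets the selective notion trade less utility for the same protection of the sensitive tokens; the privacy accounting and the exact update schedule are described in the paper itself.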
Anthology ID:
2022.naacl-main.205
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
July
Year:
2022
Address:
Seattle, United States
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
2848–2859
URL:
https://aclanthology.org/2022.naacl-main.205
DOI:
10.18653/v1/2022.naacl-main.205
Cite (ACL):
Weiyan Shi, Aiqi Cui, Evan Li, Ruoxi Jia, and Zhou Yu. 2022. Selective Differential Privacy for Language Modeling. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2848–2859, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Selective Differential Privacy for Language Modeling (Shi et al., NAACL 2022)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/2022.naacl-main.205.pdf
Software:
 2022.naacl-main.205.software.zip
Video:
 https://preview.aclanthology.org/emnlp-22-attachments/2022.naacl-main.205.mp4
Code:
 wyshi/lm_privacy
Data:
WikiText-2