Learning and Evaluating a Differentially Private Pre-trained Language Model

Shlomo Hoory, Amir Feder, Avichai Tendler, Alon Cohen, Sofia Erell, Itay Laish, Hootan Nakhost, Uri Stemmer, Ayelet Benjamini, Avinatan Hassidim, Yossi Matias


Abstract
Contextual language models have led to significantly better results on a plethora of language understanding tasks, especially when pre-trained on the same data as the downstream task. While this additional pre-training usually improves performance, it can lead to information leakage and therefore risks the privacy of individuals mentioned in the training data. One method to guarantee the privacy of such individuals is to train a differentially-private model, but this usually comes at the expense of model performance. Moreover, given a privacy parameter ε, it is hard to tell what effect it had on the trained representation. In this work we aim to guide future practitioners and researchers on how to improve privacy while maintaining good model performance. We demonstrate how to train a differentially-private pre-trained language model (i.e., BERT) with a privacy guarantee of ε=1 and with only a small degradation in performance. We experiment on a dataset of clinical notes with a model trained on a target entity extraction task, and compare it to a similar model trained without differential privacy. Finally, we present experiments showing how to interpret the differentially-private representation and understand the information lost and maintained in this process.
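The abstract describes training a language model with differential privacy. The standard mechanism for this is DP-SGD (per-example gradient clipping plus calibrated Gaussian noise); the sketch below illustrates that mechanism in NumPy under illustrative hyperparameters (clip_norm, noise_multiplier), and is not the paper's actual training code.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD update: clip each example's gradient, average, add Gaussian noise.

    Generic sketch of the DP-SGD mechanism; clip_norm and noise_multiplier
    are illustrative values, not the paper's settings. The privacy budget ε
    would be computed from noise_multiplier, batch size, and step count by a
    separate accountant, not shown here.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm,
        # bounding each individual's influence on the update.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound masks any
    # single example's contribution.
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    noise = rng.normal(0.0, sigma, size=avg.shape)
    return avg + noise

# Two toy per-example gradients; the first exceeds the clipping norm.
grads = [np.array([3.0, 4.0]), np.array([0.1, 0.2])]
update = dp_sgd_step(grads)
```

The trade-off the abstract discusses (ε=1 with small performance loss) comes from tuning the noise_multiplier and number of training steps against a privacy accountant.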
Anthology ID:
2021.privatenlp-1.3
Volume:
Proceedings of the Third Workshop on Privacy in Natural Language Processing
Month:
June
Year:
2021
Address:
Online
Editors:
Oluwaseyi Feyisetan, Sepideh Ghanavati, Shervin Malmasi, Patricia Thaine
Venue:
PrivateNLP
Publisher:
Association for Computational Linguistics
Pages:
21–29
URL:
https://aclanthology.org/2021.privatenlp-1.3
DOI:
10.18653/v1/2021.privatenlp-1.3
Cite (ACL):
Shlomo Hoory, Amir Feder, Avichai Tendler, Alon Cohen, Sofia Erell, Itay Laish, Hootan Nakhost, Uri Stemmer, Ayelet Benjamini, Avinatan Hassidim, and Yossi Matias. 2021. Learning and Evaluating a Differentially Private Pre-trained Language Model. In Proceedings of the Third Workshop on Privacy in Natural Language Processing, pages 21–29, Online. Association for Computational Linguistics.
Cite (Informal):
Learning and Evaluating a Differentially Private Pre-trained Language Model (Hoory et al., PrivateNLP 2021)
PDF:
https://preview.aclanthology.org/ingest-2024-clasp/2021.privatenlp-1.3.pdf
Video:
https://preview.aclanthology.org/ingest-2024-clasp/2021.privatenlp-1.3.mp4
Data
BookCorpus