Disentangling the Linguistic Competence of Privacy-Preserving BERT

Stefan Arnold, Nils Kemmerzell, Annika Schreiner


Abstract
Differential Privacy (DP) has been tailored to address the unique challenges of text-to-text privatization. However, text-to-text privatization is known to degrade the performance of language models when they are trained on perturbed text. Employing a series of interpretation techniques on the internal representations extracted from BERT trained on perturbed pre-text, we intend to disentangle, at the linguistic level, the distortion induced by differential privacy. Experimental results from a representational similarity analysis indicate that the overall similarity of internal representations is substantially reduced. Using probing tasks to unpack this dissimilarity, we find evidence that text-to-text privatization affects linguistic competence across several formalisms, encoding localized properties of words while falling short of encoding the contextual relationships between spans of words.
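To make the two analyses named in the abstract concrete, below is a minimal sketch of a representational similarity analysis and a linear probing task over frozen sentence representations (e.g., hidden states of a standard BERT versus a BERT trained on privatized text). The correlation distance, the Spearman comparison of dissimilarity matrices, the logistic-regression probe, and all variable names are illustrative assumptions, not the exact setup reported in the paper.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def rdm(reps):
    # Representational dissimilarity matrix: pairwise correlation distances
    # between the representations of all stimuli (sentences).
    return squareform(pdist(reps, metric="correlation"))

def rsa_similarity(reps_a, reps_b):
    # Second-order comparison: Spearman correlation between the upper
    # triangles of the two dissimilarity matrices.
    rdm_a, rdm_b = rdm(reps_a), rdm(reps_b)
    iu = np.triu_indices_from(rdm_a, k=1)
    rho, _ = spearmanr(rdm_a[iu], rdm_b[iu])
    return rho

def probe_accuracy(reps, labels):
    # Linear probe: cross-validated accuracy of predicting a linguistic
    # property (labels) from the frozen representations.
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, reps, labels, cv=5).mean()

# Toy usage with random stand-ins for 100 sentences and 768-dimensional
# activations from one layer of each model (hypothetical data).
clean_reps = np.random.randn(100, 768)
private_reps = np.random.randn(100, 768)
labels = np.random.randint(0, 2, size=100)
print(rsa_similarity(clean_reps, private_reps))
print(probe_accuracy(clean_reps, labels))

A reduced RSA score between the clean and privatized models, combined with layer-wise probing accuracies for different linguistic properties, is the kind of evidence the abstract refers to; the actual experiments use BERT layer representations and established probing suites rather than random data.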
Anthology ID: 2023.blackboxnlp-1.5
Volume: Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP
Month: December
Year: 2023
Address: Singapore
Editors: Yonatan Belinkov, Sophie Hao, Jaap Jumelet, Najoung Kim, Arya McCarthy, Hosein Mohebbi
Venues: BlackboxNLP | WS
Publisher: Association for Computational Linguistics
Pages: 65–75
URL: https://aclanthology.org/2023.blackboxnlp-1.5
DOI: 10.18653/v1/2023.blackboxnlp-1.5
Cite (ACL): Stefan Arnold, Nils Kemmerzell, and Annika Schreiner. 2023. Disentangling the Linguistic Competence of Privacy-Preserving BERT. In Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pages 65–75, Singapore. Association for Computational Linguistics.
Cite (Informal): Disentangling the Linguistic Competence of Privacy-Preserving BERT (Arnold et al., BlackboxNLP-WS 2023)
PDF: https://preview.aclanthology.org/nschneid-patch-4/2023.blackboxnlp-1.5.pdf