Privacy-preserving Neural Representations of Text

Maximin Coavoux, Shashi Narayan, Shay B. Cohen


Abstract
This article deals with adversarial attacks on deep learning systems for Natural Language Processing (NLP), in the context of privacy protection. We study a specific type of attack: an attacker eavesdrops on the hidden representations of a neural text classifier and tries to recover information about the input text. Such a scenario may arise when the computation of a neural network is shared across multiple devices, e.g., when a hidden representation is computed by a user's device and sent to a cloud-based model. We measure the privacy of a hidden representation by the ability of an attacker to accurately predict specific private information from it, and we characterize the tradeoff between the privacy and the utility of neural representations. Finally, we propose several defense methods based on modified training objectives and show that they improve the privacy of neural representations.
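
The defense the abstract alludes to (a modified training objective) can be illustrated with a short adversarial-training sketch. The snippet below is a minimal PyTorch illustration of the general idea, not the authors' exact objective: an auxiliary adversary learns to recover a private attribute from the hidden representation, while the encoder is trained both to solve the main task and to fool that adversary. All module names, dimensions, and the tradeoff weight lam are illustrative assumptions.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps token ids to the fixed-size hidden vector an attacker could eavesdrop on."""
    def __init__(self, vocab_size, emb_dim=100, hid_dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)

    def forward(self, tokens):
        _, (h_n, _) = self.lstm(self.emb(tokens))
        return h_n[-1]  # shape: (batch, hid_dim)

encoder = Encoder(vocab_size=10_000)
main_clf = nn.Linear(128, 4)   # main task head, e.g. 4 topic classes
adversary = nn.Linear(128, 2)  # tries to recover a binary private attribute

opt_main = torch.optim.Adam(list(encoder.parameters()) + list(main_clf.parameters()))
opt_adv = torch.optim.Adam(adversary.parameters())
ce = nn.CrossEntropyLoss()
lam = 0.5  # privacy/utility tradeoff weight (assumed value)

def train_step(tokens, task_labels, private_labels):
    h = encoder(tokens)

    # (1) Adversary update: learn to recover the private attribute from a
    # detached copy of the representation, so the encoder is unaffected.
    opt_adv.zero_grad()
    adv_loss = ce(adversary(h.detach()), private_labels)
    adv_loss.backward()
    opt_adv.step()

    # (2) Encoder + main-classifier update: minimize the task loss while
    # maximizing the adversary's loss, so the representation leaks less.
    opt_main.zero_grad()
    task_loss = ce(main_clf(h), task_labels)
    leak_loss = ce(adversary(h), private_labels)
    (task_loss - lam * leak_loss).backward()
    opt_main.step()

# Example call on random data: a batch of 8 sequences of length 20.
tokens = torch.randint(0, 10_000, (8, 20))
train_step(tokens, torch.randint(0, 4, (8,)), torch.randint(0, 2, (8,)))

Raising lam trades task accuracy for privacy: the higher the weight on the leakage term, the harder it becomes for an eavesdropper to recover the private attribute from the hidden vector, at some cost to main-task utility.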
Anthology ID:
D18-1001
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
1–10
URL:
https://aclanthology.org/D18-1001
DOI:
10.18653/v1/D18-1001
Cite (ACL):
Maximin Coavoux, Shashi Narayan, and Shay B. Cohen. 2018. Privacy-preserving Neural Representations of Text. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1–10, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Privacy-preserving Neural Representations of Text (Coavoux et al., EMNLP 2018)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/D18-1001.pdf
Video:
https://vimeo.com/305202770
Code:
mcoavoux/pnet
Data:
AG News