Abstract
Ensuring strong theoretical privacy guarantees on text data is a challenging problem, and such guarantees are usually attained at the expense of utility. However, to improve the practicality of privacy-preserving text analyses, it is essential to design algorithms that better optimize this tradeoff. To address this challenge, we propose a release mechanism that takes any (text) embedding vector as input and releases a corresponding private vector. The mechanism satisfies an extension of differential privacy to metric spaces. Our approach, based on first randomly projecting the vectors to a lower-dimensional space and then adding noise in this projected space, generates private vectors that achieve strong theoretical guarantees on their utility. We support our theoretical results with empirical experiments on multiple word embedding models and NLP datasets, achieving in some cases gains of more than 10% over existing state-of-the-art privatization techniques.
- Anthology ID:
- 2021.trustnlp-1.3
- Volume:
- Proceedings of the First Workshop on Trustworthy Natural Language Processing
- Month:
- June
- Year:
- 2021
- Address:
- Online
- Editors:
- Yada Pruksachatkun, Anil Ramakrishna, Kai-Wei Chang, Satyapriya Krishna, Jwala Dhamala, Tanaya Guha, Xiang Ren
- Venue:
- TrustNLP
- Publisher:
- Association for Computational Linguistics
- Pages:
- 15–27
- URL:
- https://aclanthology.org/2021.trustnlp-1.3
- DOI:
- 10.18653/v1/2021.trustnlp-1.3
- Cite (ACL):
- Oluwaseyi Feyisetan and Shiva Kasiviswanathan. 2021. Private Release of Text Embedding Vectors. In Proceedings of the First Workshop on Trustworthy Natural Language Processing, pages 15–27, Online. Association for Computational Linguistics.
- Cite (Informal):
- Private Release of Text Embedding Vectors (Feyisetan & Kasiviswanathan, TrustNLP 2021)
- PDF:
- https://preview.aclanthology.org/cschoel_rss_and_blog/2021.trustnlp-1.3.pdf
- Data
- MPQA Opinion Corpus, SST, SST-5
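The two-step idea in the abstract (random projection to a lower-dimensional space, then noise in the projected space) can be sketched in NumPy. This is a minimal illustrative sketch, not the paper's exact mechanism: the function name, the Gaussian projection, and the noise distribution (direction uniform on the sphere, magnitude drawn from a Gamma distribution, a common construction for metric differential privacy) are all assumptions; the paper's calibration of the noise to the privacy parameter may differ.

```python
import numpy as np

def private_release(x, k, epsilon, rng=None):
    """Illustrative sketch: project embedding x to k dimensions and add
    metric-DP-style noise. Parameter names and scaling are assumptions,
    not the paper's exact construction."""
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    # Random Gaussian projection (Johnson-Lindenstrauss style), scaled
    # so that norms are roughly preserved in expectation.
    P = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))
    z = P @ x
    # Noise with density proportional to exp(-epsilon * ||n||):
    # a uniform direction on the k-sphere, with magnitude ~ Gamma(k, 1/epsilon).
    direction = rng.normal(size=k)
    direction /= np.linalg.norm(direction)
    magnitude = rng.gamma(shape=k, scale=1.0 / epsilon)
    return z + magnitude * direction
```

Larger `epsilon` yields smaller noise magnitudes (weaker privacy, higher utility), while smaller `k` reduces the dimensionality over which noise must be added, which is the source of the utility gains the abstract refers to.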