Abstract
Transfer learning is an effective technique for improving a target recommender system with knowledge from a source domain. Existing research focuses on the recommendation performance of the target domain while ignoring the privacy leakage of the source domain. The transferred knowledge, however, may unintentionally leak private information from the source domain: for example, an attacker can accurately infer user demographics from the historical purchases provided by a source-domain data owner. This paper addresses this privacy issue by learning a privacy-aware neural representation that improves target performance while protecting source privacy. The key idea is to simulate attacks during training, modeled as an adversarial game, so that the transfer learning model becomes robust to attacks and protects the privacy of unseen users in the future. Experiments show that the proposed PrivNet model can successfully disentangle the knowledge that benefits the transfer from the knowledge that leaks privacy.
- Anthology ID:
- 2020.findings-emnlp.404
- Volume:
- Findings of the Association for Computational Linguistics: EMNLP 2020
- Month:
- November
- Year:
- 2020
- Address:
- Online
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 4506–4516
- URL:
- https://aclanthology.org/2020.findings-emnlp.404
- DOI:
- 10.18653/v1/2020.findings-emnlp.404
- Cite (ACL):
- Guangneng Hu and Qiang Yang. 2020. PrivNet: Safeguarding Private Attributes in Transfer Learning for Recommendation. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4506–4516, Online. Association for Computational Linguistics.
- Cite (Informal):
- PrivNet: Safeguarding Private Attributes in Transfer Learning for Recommendation (Hu & Yang, Findings 2020)
- PDF:
- https://aclanthology.org/2020.findings-emnlp.404.pdf
- Data
- MovieLens
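The adversarial game described in the abstract can be illustrated with a minimal sketch. This is not the authors' PrivNet implementation; it is a toy example, on synthetic data with hypothetical variable names, of the general idea: a shared encoder is trained to serve a utility head (the target recommendation task) while a simulated attacker head tries to recover a private attribute from the representation, and the encoder's gradient is reversed with respect to the attacker's loss.

```python
import numpy as np

# Hypothetical toy setup (not the paper's datasets): the private attribute p
# depends only on feature 0, and the task label y on the remaining features.
rng = np.random.default_rng(0)
n, d, k = 200, 8, 4
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d - 1)
y = (X[:, 1:] @ w_true > 0).astype(float)   # target-task label
p = (X[:, 0] > 0).astype(float)             # private attribute to protect

W = rng.normal(size=(d, k)) * 0.1  # shared encoder: x -> representation z
u = np.zeros(k)                    # utility head: predicts y from z
a = np.zeros(k)                    # simulated attacker: predicts p from z

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

lr, lam = 0.1, 1.0  # lam trades off target utility against source privacy
for _ in range(500):
    Z = X @ W
    # Attacker step: minimize its own loss on the private attribute.
    a -= lr * Z.T @ (sigmoid(Z @ a) - p) / n
    # Utility-head step: minimize the target-task loss.
    u -= lr * Z.T @ (sigmoid(Z @ u) - y) / n
    # Encoder step: descend the utility loss but ascend the attacker loss
    # (gradient reversal), so z stays useful for y while hiding p.
    err_y = (sigmoid(Z @ u) - y)[:, None]
    err_p = (sigmoid(Z @ a) - p)[:, None]
    W -= lr * (X.T @ (err_y * u) - lam * X.T @ (err_p * a)) / n

acc_y = float(((sigmoid(X @ W @ u) > 0.5) == y).mean())
acc_p = float(((sigmoid(X @ W @ a) > 0.5) == p).mean())
print(f"task accuracy {acc_y:.2f}, attacker accuracy {acc_p:.2f} (chance ~0.5)")
```

After training, the task accuracy stays well above chance while the simulated attacker's accuracy on the private attribute degrades toward chance, which is the disentanglement effect the abstract describes.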