Ruining Yang
2025
ObfusLM: Privacy-preserving Language Model Service against Embedding Inversion Attacks
Yu Lin | Ruining Yang | Yunlong Mao | Qizhi Zhang | Jue Hong | Quanwei Cai | Ye Wu | Huiqi Liu | Zhiyu Chen | Bing Duan | Sheng Zhong
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
With the rapid expansion of Machine Learning as a Service (MLaaS) for language models, concerns over the privacy of client inputs during inference or fine-tuning have correspondingly escalated. Recently, solutions have been proposed to safeguard client privacy through obfuscation techniques. However, these solutions incur a notable decline in model utility and mainly focus on classification tasks, rendering them impractical for real-world applications. Moreover, recent studies reveal that such obfuscation, if not well designed, is susceptible to embedding inversion attacks (EIAs). In this paper, we devise ObfusLM, a privacy-preserving MLaaS framework that leverages a model obfuscation module to achieve privacy protection for both classification and generation tasks. Based on (k, 𝜖)-anonymity, ObfusLM includes novel obfuscation algorithms that reach provable security against EIAs. Extensive experiments show that ObfusLM outperforms existing works in utility by 10% while achieving a nearly 80% resistance rate against EIAs.
2018
Development of Perceptual Training Software for Realizing High Variability Training Paradigm and Self Adaptive Training Paradigm
Ruining Yang | Hiroaki Nanjo | Masatake Dantsuji
Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation