2025
GradOT: Training-free Gradient-preserving Offsite-tuning for Large Language Models
Kai Yao | Zhaorui Tan | Penglei Gao | Lichun Li | Kaixin Wu | Yinggui Wang | Yuan Zhao | Yixin Ji | Jianke Zhu | Wei Wang
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
The rapid growth of large language models (LLMs) has made fine-tuning a key technique for adapting these models to domain-specific challenges, yet traditional centralized fine-tuning poses privacy risks for both model and data owners. One promising solution, offsite-tuning (OT), addresses these challenges by compressing a weaker emulator from the original model and fine-tuning it with an adapter to enhance privacy. However, existing OT-based methods require high computational costs and lack theoretical analysis. This paper introduces a novel OT approach based on gradient-preserving compression, named GradOT. By analyzing the OT problem through the lens of optimization, we propose a method that selectively applies compression techniques such as rank compression and channel pruning, preserving the gradients of fine-tuned adapters while ensuring privacy. Extensive experiments demonstrate that our approach surpasses existing OT methods in terms of both privacy protection and model performance. Our method provides a theoretical foundation for OT and offers a practical, training-free solution for offsite-tuning of large-scale LLMs.
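The sketch below is not the paper's GradOT algorithm; it is a minimal, hypothetical illustration of the two compression primitives the abstract names (rank compression and channel pruning) applied to a single weight matrix, with the gradient-preserving selection step only indicated in a comment. Function names, shapes, and thresholds are assumptions for illustration.

```python
# Illustrative sketch only (not the paper's method): rank compression via
# truncated SVD and magnitude-based channel pruning on one toy weight matrix.
import torch


def rank_compress(weight: torch.Tensor, rank: int) -> torch.Tensor:
    """Approximate a weight matrix with a truncated SVD of the given rank."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    return U[:, :rank] @ torch.diag(S[:rank]) @ Vh[:rank, :]


def channel_prune(weight: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """Zero out the output channels (rows) with the smallest L2 norm."""
    norms = weight.norm(dim=1)
    k = max(1, int(keep_ratio * weight.shape[0]))
    keep = torch.topk(norms, k).indices
    mask = torch.zeros(weight.shape[0], dtype=torch.bool)
    mask[keep] = True
    return weight * mask.unsqueeze(1)


if __name__ == "__main__":
    torch.manual_seed(0)
    w = torch.randn(64, 64)                      # toy layer weight
    w_low_rank = rank_compress(w, rank=8)        # rank compression
    w_pruned = channel_prune(w, keep_ratio=0.5)  # channel pruning
    # A GradOT-style pipeline would, per layer, choose the compression that
    # best preserves the gradients reaching the fine-tuned adapters; that
    # selection criterion is what the paper derives and is omitted here.
    print((w - w_low_rank).norm().item(), (w - w_pruned).norm().item())
```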