Stealing Training Data from Large Language Models in Decentralized Training through Activation Inversion Attack

Chenxi Dai, Lin Lu, Pan Zhou


Abstract
Decentralized training has become a resource-efficient framework for democratizing the training of large language models (LLMs). However, the privacy risks of this framework, particularly the potential inclusion of sensitive data in training datasets, remain unexplored. This paper identifies a novel and realistic attack surface, the leakage of training data in decentralized training, and proposes the first activation inversion attack (AIA). AIA first constructs a shadow dataset of text labels and their corresponding activations from public datasets. Leveraging this shadow dataset, an attack model is trained to reconstruct training data from activations intercepted in the victim's decentralized training. Extensive experiments on various LLMs and publicly available datasets demonstrate the susceptibility of decentralized training to AIA. These findings highlight the urgent need for stronger security measures in decentralized training to mitigate privacy risks when training LLMs.
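The pipeline the abstract describes (collect a shadow dataset of activation/text pairs from public data, then train an inverter to map intercepted activations back to text) can be sketched with a toy model. This is a hypothetical illustration, not the paper's implementation: the "victim" is a random linear embedding, and the attack model is a least-squares inverter over one-hot token targets.

```python
# Toy activation inversion attack (hypothetical sketch, not the paper's code).
# The victim maps tokens to hidden activations; the attacker fits a linear
# inverter on a shadow set of (activation, token) pairs built from public
# data, then recovers tokens from intercepted "private" activations.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, HIDDEN = 50, 64  # toy sizes; real LLM dims are far larger

# Victim's embedding (unknown to the attacker): one-hot token -> activation.
W_victim = rng.normal(size=(VOCAB, HIDDEN))

def activations(token_ids):
    """Hidden states a pipeline stage would expose in decentralized training."""
    return np.eye(VOCAB)[token_ids] @ W_victim

# 1) Shadow dataset: attacker feeds public text through its own copy of the stage.
shadow_tokens = rng.integers(0, VOCAB, size=2000)
shadow_acts = activations(shadow_tokens)

# 2) Attack model: least-squares map from activations to one-hot token targets.
targets = np.eye(VOCAB)[shadow_tokens]
W_attack, *_ = np.linalg.lstsq(shadow_acts, targets, rcond=None)

# 3) Inversion: reconstruct tokens from the victim's private-batch activations.
private_tokens = rng.integers(0, VOCAB, size=100)
recovered = (activations(private_tokens) @ W_attack).argmax(axis=1)
accuracy = (recovered == private_tokens).mean()
print(f"token recovery accuracy: {accuracy:.2f}")
```

In this linear toy the inverter recovers tokens near-perfectly; the paper's setting replaces the linear maps with LLM layers and a learned reconstruction model, but the shadow-then-invert structure is the same.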
Anthology ID:
2025.acl-long.707
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
14539–14551
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.707/
Cite (ACL):
Chenxi Dai, Lin Lu, and Pan Zhou. 2025. Stealing Training Data from Large Language Models in Decentralized Training through Activation Inversion Attack. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14539–14551, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Stealing Training Data from Large Language Models in Decentralized Training through Activation Inversion Attack (Dai et al., ACL 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.707.pdf