Abstract
Federated learning is a rapidly growing area of research, holding the promise of privacy-preserving distributed training on edge devices. The largest barrier to wider adoption of federated learning is the communication cost of model updates, which is accentuated by the fact that many edge devices are bandwidth-constrained. At the same time, within the machine learning theory community, a separate line of research has emerged around optimizing networks within a subspace of the full space of all parameters. The dimension of the smallest subspace for which these methods still yield strong results is called the intrinsic dimension. In this work, we prove a general correspondence between the notions of intrinsic dimension and gradient compressibility, and we show that a family of low-bandwidth federated learning algorithms, which we call intrinsic gradient compression algorithms, naturally emerges from this correspondence. Finally, we conduct large-scale NLP experiments using transformer models with over 100M parameters (GPT-2 and BERT), and show that our method significantly outperforms the state-of-the-art in gradient compression.
- Anthology ID:
- 2022.fl4nlp-1.4
- Volume:
- Proceedings of the First Workshop on Federated Learning for Natural Language Processing (FL4NLP 2022)
- Month:
- May
- Year:
- 2022
- Address:
- Dublin, Ireland
- Editors:
- Bill Yuchen Lin, Chaoyang He, Chulin Xie, Fatemehsadat Mireshghallah, Ninareh Mehrabi, Tian Li, Mahdi Soltanolkotabi, Xiang Ren
- Venue:
- FL4NLP
- Publisher:
- Association for Computational Linguistics
- Pages:
- 27–41
- URL:
- https://aclanthology.org/2022.fl4nlp-1.4
- DOI:
- 10.18653/v1/2022.fl4nlp-1.4
- Cite (ACL):
- Luke Melas-Kyriazi and Franklyn Wang. 2022. Intrinsic Gradient Compression for Scalable and Efficient Federated Learning. In Proceedings of the First Workshop on Federated Learning for Natural Language Processing (FL4NLP 2022), pages 27–41, Dublin, Ireland. Association for Computational Linguistics.
- Cite (Informal):
- Intrinsic Gradient Compression for Scalable and Efficient Federated Learning (Melas-Kyriazi & Wang, FL4NLP 2022)
- PDF:
- https://aclanthology.org/2022.fl4nlp-1.4.pdf
- Data
- SST, SST-2
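
The abstract describes training within a low-dimensional subspace of the full parameter space, with the subspace dimension at which performance remains strong called the intrinsic dimension. The following is a minimal sketch of that idea, assuming the standard reparameterization θ = θ₀ + P·z (Li et al., 2018); it is an illustration of the general technique, not the authors' implementation, and all names, dimensions, and the toy loss are assumptions for demonstration.

```python
import torch

D = 10_000   # full parameter dimension (illustrative)
d = 100      # subspace / intrinsic dimension (illustrative)

torch.manual_seed(0)
theta_0 = torch.randn(D)            # frozen initial parameters
P = torch.randn(D, d) / d ** 0.5    # fixed random projection; reproducible from a shared seed

# Trainable coordinates live entirely in the d-dimensional subspace.
z = torch.zeros(d, requires_grad=True)

def model_params() -> torch.Tensor:
    # All D parameters are a function of the d subspace coordinates.
    return theta_0 + P @ z

# Toy objective standing in for the real training loss.
loss = (model_params() ** 2).sum()
loss.backward()

# Only z.grad (d numbers) needs to leave the device; a server that knows
# the seed of P can reconstruct the corresponding full-space update.
compressed_update = z.grad            # shape (d,)
full_space_update = P @ compressed_update  # shape (D,), reconstructed server-side
print(compressed_update.shape, full_space_update.shape)
```

In a federated setting this is the source of the bandwidth savings: each round communicates d values instead of D, a compression ratio of D/d (here 100x), at the cost of restricting updates to the chosen subspace.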