Combining Global Sparse Gradients with Local Gradients in Distributed Neural Network Training

Alham Fikri Aji, Kenneth Heafield, Nikolay Bogoychev


Abstract
One way to reduce network traffic in multi-node data-parallel stochastic gradient descent is to only exchange the largest gradients. However, doing so damages the gradient and degrades the model’s performance. Transformer models degrade dramatically while the impact on RNNs is smaller. We restore gradient quality by combining the compressed global gradient with the node’s locally computed uncompressed gradient. Neural machine translation experiments show that Transformer convergence is restored while RNNs converge faster. With our method, training on 4 nodes converges up to 1.5x as fast as with uncompressed gradients and scales 3.5x relative to single-node training.
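The abstract describes exchanging only the largest gradient entries and then patching the resulting sparse global gradient with each node's dense local gradient. Below is a minimal numpy sketch of that general idea under stated assumptions, not the authors' exact formulation: the function names (sparsify, combine), the 25% density used in the toy run, and the rule of falling back to the local gradient on dropped coordinates are illustrative choices, and the in-process summation stands in for a real sparse gradient exchange between nodes.

import numpy as np

def sparsify(grad, k_ratio=0.01):
    # Keep only the k largest-magnitude entries (top-k gradient compression);
    # everything else is zeroed out and would not be sent over the network.
    k = max(1, int(grad.size * k_ratio))
    threshold = np.partition(np.abs(grad).ravel(), -k)[-k]
    mask = np.abs(grad) >= threshold
    return grad * mask, mask

def combine(global_sparse, local_grad, mask):
    # Use the exchanged (sparse) global value where one exists, and fall back
    # to the node's own dense local gradient on the dropped coordinates.
    return np.where(mask, global_sparse, local_grad)

# Toy run with two simulated nodes and 25% density for readability.
rng = np.random.default_rng(0)
local_grads = [rng.normal(size=(4, 4)) for _ in range(2)]

sparse_parts = [sparsify(g, k_ratio=0.25) for g in local_grads]
global_sparse = sum(s for s, _ in sparse_parts)   # stands in for a sparse all-reduce
global_mask = np.logical_or.reduce([m for _, m in sparse_parts])

for node_id, local in enumerate(local_grads):
    update = combine(global_sparse, local, global_mask)
    print(f"node {node_id}: {int(global_mask.sum())} of {update.size} "
          f"coordinates come from the exchanged global gradient")

In an actual multi-node setup the summation over sparse_parts would be a sparse gradient exchange between machines, which is where the bandwidth saving comes from; the combination step costs no extra communication because the local gradient is already on each node.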
Anthology ID: D19-1373
Volume: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Month: November
Year: 2019
Address: Hong Kong, China
Editors: Kentaro Inui, Jing Jiang, Vincent Ng, Xiaojun Wan
Venues: EMNLP | IJCNLP
SIG: SIGDAT
Publisher: Association for Computational Linguistics
Pages: 3626–3631
URL: https://aclanthology.org/D19-1373
DOI: 10.18653/v1/D19-1373
Cite (ACL): Alham Fikri Aji, Kenneth Heafield, and Nikolay Bogoychev. 2019. Combining Global Sparse Gradients with Local Gradients in Distributed Neural Network Training. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3626–3631, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal): Combining Global Sparse Gradients with Local Gradients in Distributed Neural Network Training (Aji et al., EMNLP-IJCNLP 2019)
PDF: https://aclanthology.org/D19-1373.pdf