@inproceedings{aji-heafield-2017-sparse,
    title = "Sparse Communication for Distributed Gradient Descent",
    author = "Aji, Alham Fikri  and
      Heafield, Kenneth",
    editor = "Palmer, Martha  and
      Hwa, Rebecca  and
      Riedel, Sebastian",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/iwcs-25-ingestion/D17-1045/",
    doi = "10.18653/v1/D17-1045",
    pages = "440--445",
    abstract = "We make distributed stochastic gradient descent faster by exchanging sparse updates instead of dense updates. Gradient updates are positively skewed as most updates are near zero, so we map the 99{\%} smallest updates (by absolute value) to zero then exchange sparse matrices. This method can be combined with quantization to further improve the compression. We explore different configurations and apply them to neural machine translation and MNIST image classification tasks. Most configurations work on MNIST, whereas different configurations reduce convergence rate on the more complex translation task. Our experiments show that we can achieve up to 49{\%} speed up on MNIST and 22{\%} on NMT without damaging the final accuracy or BLEU."
}
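As a rough illustration of the sparsification step the abstract describes (mapping the 99% smallest updates, by absolute value, to zero before exchanging sparse matrices), here is a minimal NumPy sketch. The function name, the drop_ratio parameter, and the exact thresholding code are illustrative assumptions, not the paper's implementation; the local residual accumulation of dropped values follows the paper's gradient-dropping description, and the quantization the abstract mentions as a further compression step is omitted here.

import numpy as np

def drop_smallest_updates(grad, residual, drop_ratio=0.99):
    # Fold previously dropped values back into the current gradient
    # (local residual accumulation, per the paper's gradient dropping).
    acc = grad + residual
    # Magnitude below which roughly drop_ratio of entries fall.
    threshold = np.quantile(np.abs(acc), drop_ratio)
    keep = np.abs(acc) >= threshold
    sparse_update = np.where(keep, acc, 0.0)   # exchanged between workers
    new_residual = np.where(keep, 0.0, acc)    # retained locally
    return sparse_update, new_residual

# Example: roughly 1% of entries survive the cut.
grad = np.random.randn(1024, 512).astype(np.float32)
residual = np.zeros_like(grad)
update, residual = drop_smallest_updates(grad, residual)
print(np.count_nonzero(update) / update.size)  # ~0.01

In a distributed setting, only sparse_update (as a sparse index/value pair list) would be communicated, which is where the 49% MNIST and 22% NMT speedups reported in the abstract come from.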