
Communication-efficient Distributed SGD with Sketching

2020-02-20

Abstract

Large-scale distributed training of neural networks is often limited by network bandwidth, wherein the communication time overwhelms the local computation time. Motivated by the success of sketching methods in sub-linear/streaming algorithms, we introduce Sketched-SGD, an algorithm for carrying out distributed SGD by communicating sketches instead of full gradients. We show that Sketched-SGD has favorable convergence rates on several classes of functions. When considering all communication – both of gradients and of updated model weights – Sketched-SGD reduces the amount of communication required, compared to other gradient compression methods, from O(d) or O(W) to O(log d), where d is the number of model parameters and W is the number of workers participating in training. We run experiments on a transformer model, an LSTM, and a residual network, demonstrating up to a 40x reduction in total communication cost with no loss in final model performance. We also show experimentally that Sketched-SGD scales to at least 256 workers without increasing communication cost or degrading model performance.
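
To make the core idea concrete, below is a minimal, illustrative Python sketch of communicating compressed gradients via a Count Sketch. It is not the paper's implementation: the CountSketch and sketched_allreduce names, the sketch dimensions, and the single-round top-k recovery are assumptions chosen for brevity, and the error-feedback and momentum handling used in the actual method are omitted.

```python
import numpy as np


class CountSketch:
    """Toy Count Sketch that compresses a d-dimensional vector into a rows x cols table."""

    def __init__(self, d, rows=5, cols=500, seed=0):
        rng = np.random.default_rng(seed)
        self.d, self.rows, self.cols = d, rows, cols
        # Hash each coordinate to one bucket per row, with a random sign.
        self.buckets = rng.integers(0, cols, size=(rows, d))
        self.signs = rng.choice([-1.0, 1.0], size=(rows, d))
        self.table = np.zeros((rows, cols))

    def accumulate(self, vec):
        # Add a vector into the sketch; sketching is linear, so sketches can be summed.
        for r in range(self.rows):
            np.add.at(self.table[r], self.buckets[r], self.signs[r] * vec)

    def estimate(self):
        # Median-across-rows estimate of each coordinate's value.
        per_row = np.stack(
            [self.signs[r] * self.table[r, self.buckets[r]] for r in range(self.rows)]
        )
        return np.median(per_row, axis=0)


def sketched_allreduce(worker_grads, k=50, rows=5, cols=500):
    """Toy communication round (hypothetical helper, not the paper's API):
    each worker sketches its gradient, only the sketch tables are exchanged,
    and an approximate top-k of the summed gradient is recovered."""
    d = worker_grads[0].shape[0]
    total = CountSketch(d, rows, cols, seed=42)
    for g in worker_grads:
        local = CountSketch(d, rows, cols, seed=42)  # workers share hash functions
        local.accumulate(g)
        total.table += local.table  # only rows*cols numbers are communicated per worker
    est = total.estimate()
    topk = np.argsort(np.abs(est))[-k:]
    update = np.zeros(d)
    update[topk] = est[topk]  # sparse approximation of the summed gradient
    return update


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, W = 10_000, 4
    grads = [rng.standard_normal(d) * 0.01 for _ in range(W)]
    grads[0][:10] += 5.0  # plant a few heavy coordinates to recover
    update = sketched_allreduce(grads)
    print("nonzero coordinates in recovered update:", np.count_nonzero(update))
```

Because sketches are linear, per-worker tables can simply be summed, so in this toy setup the bytes exchanged per round scale with the sketch size rather than with the number of model parameters d.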

