
Local SGD with Periodic Averaging: Tighter Analysis and Adaptive Synchronization


Abstract

Communication overhead is one of the key challenges that hinders the scalability of distributed optimization algorithms. In this paper, we study local distributed SGD, where data is partitioned among computation nodes, and the computation nodes perform local updates while periodically exchanging the model among the workers to perform averaging. While local SGD is empirically shown to provide promising results, a theoretical understanding of its performance remains open. We strengthen the convergence analysis for local SGD and show that local SGD can be far less expensive and applied far more generally than current theory suggests. Specifically, we show that for loss functions that satisfy the Polyak-Łojasiewicz condition, $O((pT)^{1/3})$ rounds of communication suffice to achieve a linear speedup, that is, an error of $O(1/(pT))$, where $T$ is the total number of model updates at each worker and $p$ is the number of workers. This is in contrast with previous work, which required a higher number of communication rounds and was limited to strongly convex loss functions for similar asymptotic performance. We also develop an adaptive synchronization scheme that provides a general condition for linear speedup. Finally, we validate the theory with experimental results, running over AWS EC2 clouds and an internal GPU cluster.
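
To make the scheme concrete, below is a minimal NumPy sketch (not the authors' implementation) of local SGD with periodic averaging: each of $p$ workers takes local SGD steps on its own data partition, and all models are averaged every $H$ local steps, so running $T$ local updates costs $T/H$ communication rounds. The names (`local_sgd`, `grad_fn`) and hyperparameters (`H`, `lr`) are illustrative assumptions, not the paper's API.

```python
import numpy as np

def local_sgd(grad_fn, w0, p, T, H, lr, seed=0):
    """Local SGD with periodic averaging (illustrative sketch).

    p workers each run T local stochastic-gradient steps on their own
    data partition; every H local steps all models are averaged (one
    communication round), for T // H rounds in total.
    """
    rng = np.random.default_rng(seed)
    models = [w0.copy() for _ in range(p)]
    for t in range(1, T + 1):
        # each worker takes one local stochastic gradient step
        for i in range(p):
            models[i] = models[i] - lr * grad_fn(models[i], i, rng)
        # periodic synchronization: average all worker models
        if t % H == 0:
            avg = np.mean(models, axis=0)
            models = [avg.copy() for _ in range(p)]
    return np.mean(models, axis=0)


# Toy usage: worker i holds the quadratic loss 0.5 * ||w - c_i||^2,
# so the global minimizer is the mean of the centers c_i.
if __name__ == "__main__":
    p, d = 8, 5
    centers = np.random.default_rng(1).normal(size=(p, d))

    def grad_fn(w, i, rng):
        # stochastic gradient: exact local gradient plus Gaussian noise
        return (w - centers[i]) + 0.1 * rng.normal(size=w.shape)

    w = local_sgd(grad_fn, np.zeros(d), p=p, T=300, H=10, lr=0.05)
    print("distance to optimum:", np.linalg.norm(w - centers.mean(axis=0)))
```

In terms of the abstract's result, the communication period $H$ can be chosen so that the number of rounds $T/H$ scales as $O((pT)^{1/3})$ (roughly $H \propto T^{2/3}/p^{1/3}$) while still attaining the $O(1/(pT))$ error of fully synchronous SGD under the Polyak-Łojasiewicz condition.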
