Optimal Algorithms for Smooth and Strongly Convex Distributed Optimization in Networks

Abstract

In this paper, we determine the optimal convergence rates for strongly convex and smooth distributed optimization in two settings: centralized and decentralized communications over a network. For centralized (i.e. master/slave) algorithms, we show that distributing Nesterov’s accelerated gradient descent is optimal and achieves a precision ε > 0 in time O(√κ_g (1 + ∆τ) log(1/ε)), where κ_g is the condition number of the (global) function to optimize, ∆ is the diameter of the network, and τ (resp. 1) is the time needed to communicate values between two neighbors (resp. perform local computations). For decentralized algorithms based on gossip, we provide the first optimal algorithm, called the multi-step dual accelerated (MSDA) method, that achieves a precision ε > 0 in time O(√κ_l (1 + τ/√γ) log(1/ε)), where κ_l is the condition number of the local functions and γ is the (normalized) eigengap of the gossip matrix used for communication between nodes. We then verify the efficiency of MSDA against state-of-the-art methods for two problems: least-squares regression and classification by logistic regression.
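To illustrate how the two bounds above compare, here is a minimal sketch in Python that evaluates the centralized and decentralized time-to-ε expressions from the abstract, ignoring the constant factors hidden in the O(·) notation. The function names and the example parameter values (condition numbers, network diameter, communication time τ, eigengap γ) are hypothetical and chosen only for illustration.

```python
import math

def centralized_time(kappa_g, delta, tau, eps):
    """Centralized (master/slave) bound, up to constants:
    sqrt(kappa_g) * (1 + delta * tau) * log(1/eps)."""
    return math.sqrt(kappa_g) * (1 + delta * tau) * math.log(1 / eps)

def decentralized_time(kappa_l, gamma, tau, eps):
    """Decentralized gossip-based (MSDA) bound, up to constants:
    sqrt(kappa_l) * (1 + tau / sqrt(gamma)) * log(1/eps)."""
    return math.sqrt(kappa_l) * (1 + tau / math.sqrt(gamma)) * math.log(1 / eps)

# Hypothetical example values: global/local condition numbers, network
# diameter, per-message communication time, gossip eigengap, target precision.
kappa_g, kappa_l = 100.0, 150.0
delta, tau, gamma, eps = 5, 0.5, 0.1, 1e-6

print("centralized bound:  ", centralized_time(kappa_g, delta, tau, eps))
print("decentralized bound:", decentralized_time(kappa_l, gamma, tau, eps))
```

Note how the network enters the two bounds differently: through its diameter ∆ in the centralized case, and through the eigengap γ of the gossip matrix in the decentralized case.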
