
Optimal Algorithms for Non-Smooth Distributed Optimization in Networks

2020-02-17

Abstract 

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in O(1/√t), the structure of the communication network only impacts a second-order term in O(1/t), where t is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a d^{1/4} multiplicative factor of the optimal convergence rate, where d is the underlying dimension.
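The smoothing idea behind DRS is to replace a non-smooth local function f with its Gaussian convolution f_γ(x) = E[f(x + γZ)], Z ~ N(0, I), which is differentiable and can be minimized using stochastic gradients built from subgradients of f at randomly perturbed points. The snippet below is a minimal single-machine sketch of that smoothing-and-gradient-estimation step, not the distributed algorithm from the paper; the helper name `smoothed_value_and_grad` and the choice of f(x) = ||x||₁ are illustrative assumptions.

```python
import numpy as np

def smoothed_value_and_grad(f, subgrad, x, gamma=0.1, n_samples=100, rng=None):
    """Monte-Carlo estimate of the Gaussian-smoothed function
    f_gamma(x) = E[f(x + gamma * Z)], Z ~ N(0, I),
    and of its gradient E[g(x + gamma * Z)], where g(.) returns a subgradient of f.
    f_gamma is differentiable even when f itself is non-smooth."""
    rng = np.random.default_rng() if rng is None else rng
    vals, grads = [], []
    for _ in range(n_samples):
        z = rng.standard_normal(x.shape)
        vals.append(f(x + gamma * z))
        grads.append(subgrad(x + gamma * z))
    return np.mean(vals), np.mean(grads, axis=0)

# Illustrative non-smooth objective: f(x) = ||x||_1, with sign(x) as a subgradient.
f = lambda x: np.abs(x).sum()
subgrad = lambda x: np.sign(x)

x = np.array([0.5, -1.0, 0.0])
val, grad = smoothed_value_and_grad(f, subgrad, x, gamma=0.05, n_samples=2000)
print(val, grad)
```

Smaller γ keeps f_γ closer to f (within roughly γL√d for an L-Lipschitz f) at the cost of a less smooth surrogate; this trade-off is the source of the dimension dependence reflected in the d^{1/4} factor mentioned in the abstract.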
