Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solution for Nonconvex Distributed Optimization Over Networks

2020-03-16

Abstract

In this work, we study two first-order primal-dual based algorithms, the Gradient Primal-Dual Algorithm (GPDA) and the Gradient Alternating Direction Method of Multipliers (GADMM), for solving a class of linearly constrained non-convex optimization problems. We show that with random initialization of the primal and dual variables, both algorithms are able to compute second-order stationary solutions (ss2) with probability one. This is the first result showing that a primal-dual algorithm is capable of finding ss2 when using only first-order information; it also extends existing results for first-order, but primal-only, algorithms. An important implication of our result is that it also gives rise to the first global convergence result to the ss2 for two classes of unconstrained distributed non-convex learning problems over multi-agent networks.
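To make the iteration concrete, below is a minimal Python sketch of a gradient primal-dual step for a problem of the form min f(x) subject to Ax = b: one gradient step on the augmented Lagrangian in the primal variable, followed by dual gradient ascent. The step sizes alpha and rho, the test objective, and all function names here are illustrative assumptions for exposition, not the paper's exact scheme or tuned parameters.

```python
import numpy as np

# A minimal GPDA-style sketch for: min f(x) s.t. Ax = b.
# grad_f, alpha, rho, and iters are illustrative assumptions.
def gpda(grad_f, A, b, x0, lam0, alpha=1e-3, rho=1.0, iters=20000):
    x, lam = x0.copy(), lam0.copy()
    for _ in range(iters):
        # Primal: one gradient step on the augmented Lagrangian
        #   L_rho(x, lam) = f(x) + <lam, Ax - b> + (rho/2) ||Ax - b||^2
        grad_L = grad_f(x) + A.T @ lam + rho * A.T @ (A @ x - b)
        x = x - alpha * grad_L
        # Dual: gradient ascent using the new primal iterate
        lam = lam + rho * (A @ x - b)
    return x, lam

# Random initialization of primal and dual variables, as in the abstract.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5))
b = rng.standard_normal(2)
# Gradient of a simple nonconvex objective f(x) = sum(x_i^4 - x_i^2),
# chosen only to exercise the iteration.
grad_f = lambda x: 4 * x**3 - 2 * x
x, lam = gpda(grad_f, A, b, rng.standard_normal(5), rng.standard_normal(2))
print(np.linalg.norm(A @ x - b))  # constraint residual shrinks toward 0
```

Note the order of the two updates: the dual ascent uses the freshly updated primal iterate, which is the standard augmented-Lagrangian convention; with only gradient (first-order) information, this is the setting in which the paper's second-order stationarity guarantee is stated.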
