Abstract
In this work, we study two first-order primal-dual based algorithms, the Gradient Primal-Dual Algorithm (GPDA) and the Gradient Alternating Direction Method of Multipliers (GADMM), for solving a class of linearly constrained non-convex optimization problems. We show that, with random initialization of the primal and dual variables, both algorithms are able to compute second-order stationary solutions (ss2) with probability one. This is the first result showing that a primal-dual algorithm is capable of finding ss2 when using only first-order information; it also extends the existing results for first-order, but primal-only, algorithms. An important implication of our result is that it also gives rise to the first global convergence result to ss2 for two classes of unconstrained distributed non-convex learning problems over multi-agent networks.
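For concreteness, the following is a minimal LaTeX sketch of the linearly constrained problem class referred to above, under the standard assumption that the objective is smooth and possibly non-convex and the constraint is linear; the symbols $f$, $A$, $b$, $\lambda$, and the penalty parameter $\rho$ are illustrative and are not fixed by the abstract itself.

\begin{equation*}
  \min_{x \in \mathbb{R}^n} \; f(x)
  \quad \text{s.t.} \quad Ax = b,
\end{equation*}
% Primal-dual methods of the kind studied here operate on an augmented
% Lagrangian of the following generic form (illustrative; step sizes and
% the exact update order of GPDA/GADMM are not specified in the abstract):
\begin{equation*}
  L_{\rho}(x, \lambda) \;=\; f(x) \;+\; \langle \lambda,\, Ax - b \rangle
  \;+\; \frac{\rho}{2}\,\|Ax - b\|^2 .
\end{equation*}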