
Accelerated Proximal Gradient Methods for Nonconvex Programming

2020-02-04

Abstract

Nonconvex and nonsmooth problems have recently received considerable attention in signal/image processing, statistics, and machine learning. However, solving nonconvex and nonsmooth optimization problems remains a significant challenge. Accelerated proximal gradient (APG) is an excellent method for convex programming, but it is still unknown whether the usual APG can ensure convergence to a critical point in nonconvex programming. In this paper, we extend APG to general nonconvex and nonsmooth programs by introducing a monitor that satisfies the sufficient descent property. Accordingly, we propose a monotone APG and a nonmonotone APG. The latter waives the requirement of monotonic reduction of the objective function and needs less computation in each iteration. To the best of our knowledge, we are the first to provide APG-type algorithms for general nonconvex and nonsmooth problems that ensure every accumulation point is a critical point, while the convergence rate remains O(1/k^2) when the problem is convex, where k is the number of iterations. Numerical results testify to the advantage of our algorithms in speed.
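The monitor idea can be illustrated with a minimal Python sketch, not the paper's exact update rules: an extrapolated (Nesterov-style) proximal-gradient step is paired with a plain proximal-gradient "monitor" step from the current iterate, and whichever gives the lower objective is accepted, so the objective never increases. The names monotone_apg and soft_threshold, the FISTA-style momentum, and the toy l1-regularized least-squares usage are illustrative assumptions, not taken from the paper.

import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def monotone_apg(grad_f, F, prox_g, x0, step, n_iter=300):
    # Illustrative monotone APG: accept the accelerated candidate only if it
    # does no worse than a plain proximal-gradient "monitor" step from x_k,
    # which guarantees descent of F when step <= 1/L (L = Lipschitz constant
    # of grad_f).
    x_prev, x = x0.copy(), x0.copy()
    t_prev, t = 1.0, 1.0
    for _ in range(n_iter):
        beta = (t_prev - 1.0) / t
        y = x + beta * (x - x_prev)                 # extrapolation point
        z = prox_g(y - step * grad_f(y), step)      # accelerated candidate
        v = prox_g(x - step * grad_f(x), step)      # monitor step from x_k
        x_prev = x
        x = z if F(z) <= F(v) else v                # keep the better point
        t_prev, t = t, 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    return x

# Toy usage on l1-regularized least squares (convex, for illustration only).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2                       # Lipschitz constant of grad f
grad_f = lambda x: A.T @ (A @ x - b)
F = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))
prox_g = lambda v, s: soft_threshold(v, lam * s)
x_hat = monotone_apg(grad_f, F, prox_g, np.zeros(100), step=1.0 / L)

The two-prox safeguard above reflects only the monotone scheme in simplified form; the paper's nonmonotone variant relaxes the per-iteration descent requirement and, per the abstract, needs less computation in each iteration.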
