
Convergence Analysis of Proximal Gradient with Momentum for Nonconvex Optimization

2020-03-10

Abstract

In this work, we investigate the accelerated proximal gradient method for nonconvex programming (APGnc). At each iteration, the method compares a usual proximal gradient step with a linear extrapolation step and accepts the one with the lower function value, which guarantees a monotonic decrease. Specifically, under a general nonsmooth and nonconvex setting, we provide a rigorous argument showing that the limit points of the sequence generated by APGnc are critical points of the objective function. Then, by exploiting the Kurdyka-Łojasiewicz (KŁ) property for a broad class of functions, we establish the linear and sub-linear convergence rates of the function value sequence generated by APGnc. We further propose a stochastic variance reduced APGnc (SVRG-APGnc) and establish its linear convergence under a special case of the KŁ property. We also extend the analysis to the inexact versions of these methods and develop an adaptive momentum strategy that improves the numerical performance.
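The abstract describes APGnc as choosing, at each iteration, between a plain proximal gradient step and a proximal step taken from a momentum-extrapolated point, keeping whichever candidate yields the lower objective value. The following is a minimal sketch of that accept/reject rule, not the paper's exact algorithm: the names (apgnc_sketch, grad_f, prox_g), the fixed step size eta, the constant momentum beta, and the l1-regularized least-squares example are illustrative assumptions.

```python
import numpy as np

def apgnc_sketch(grad_f, prox_g, F, x0, eta=0.1, beta=0.5, max_iter=200):
    """Sketch of an APGnc-style loop for minimizing F(x) = f(x) + g(x),
    with f smooth (possibly nonconvex) and g nonsmooth.

    grad_f : gradient of the smooth part f
    prox_g : proximal operator of g, called as prox_g(v, eta)
    F      : full objective, used only to compare the two candidate iterates
    """
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(max_iter):
        # Candidate 1: usual proximal gradient step from the current iterate x.
        v = prox_g(x - eta * grad_f(x), eta)
        # Candidate 2: proximal gradient step from the extrapolated (momentum) point.
        y = x + beta * (x - x_prev)
        z = prox_g(y - eta * grad_f(y), eta)
        # Accept whichever candidate has the lower objective value, so the
        # sequence F(x_k) is monotonically non-increasing.
        x_prev, x = x, (z if F(z) <= F(v) else v)
    return x

# Hypothetical usage: l1-regularized least squares on random data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, b, lam = rng.standard_normal((40, 100)), rng.standard_normal(40), 0.1
    f_grad = lambda x: A.T @ (A @ x - b)
    soft = lambda v, eta: np.sign(v) * np.maximum(np.abs(v) - lam * eta, 0.0)
    obj = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.linalg.norm(x, 1)
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of grad f
    x_star = apgnc_sketch(f_grad, soft, obj, np.zeros(100), eta=step)
    print(obj(x_star))
```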
