An Adaptive Accelerated Proximal Gradient Method and its Homotopy Continuation for Sparse Optimization

2020-03-04

Abstract

We first propose an adaptive accelerated proximal gradient (APG) method for minimizing strongly convex composite functions with unknown convexity parameters. This method incorporates a restarting scheme to automatically estimate the strong convexity parameter and achieves a nearly optimal iteration complexity. Then we consider the $\ell_1$-regularized least-squares ($\ell_1$-LS) problem in the high-dimensional setting. Although such an objective function is not strongly convex, it has restricted strong convexity over sparse vectors. We exploit this property by combining the adaptive APG method with a homotopy continuation scheme, which generates a sparse solution path towards optimality. This method achieves a global linear rate of convergence, and its overall iteration complexity has a weaker dependency on the restricted condition number than previous work.
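To make the overall scheme concrete, below is a minimal Python sketch of an APG inner solver with adaptive restart, wrapped in a homotopy continuation loop for the $\ell_1$-LS problem $\min_x \tfrac{1}{2}\|Ax-b\|_2^2 + \lambda\|x\|_1$. It illustrates the general technique rather than the authors' implementation: the gradient-based restart test of O'Donoghue and Candès stands in for the paper's scheme that explicitly estimates the strong convexity parameter, and the continuation factor `eta`, tolerance `tol`, and iteration caps are assumed values.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (component-wise soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def apg_restart(A, b, lam, x0, L, max_iter=500, tol=1e-8):
    """FISTA-style APG for 0.5*||Ax - b||^2 + lam*||x||_1 with adaptive restart."""
    x = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(max_iter):
        grad = A.T @ (A @ y - b)                        # gradient of the smooth part at y
        x_next = soft_threshold(y - grad / L, lam / L)  # proximal gradient step
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        # Gradient-based restart heuristic (O'Donoghue-Candes): if the momentum
        # direction opposes the last proximal step, drop the accumulated momentum.
        if np.dot(y - x_next, x_next - x) > 0.0:
            t_next = 1.0
            y = x_next
        else:
            y = x_next + ((t - 1.0) / t_next) * (x_next - x)
        if np.linalg.norm(x_next - x) <= tol * max(1.0, np.linalg.norm(x_next)):
            return x_next
        x, t = x_next, t_next
    return x

def homotopy_l1_ls(A, b, lam_target, eta=0.7):
    """Homotopy continuation: solve a sequence of l1-LS problems with
    geometrically decreasing regularization, warm-starting each stage."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    lam = np.max(np.abs(A.T @ b))        # for lam >= this value, x = 0 is optimal
    x = np.zeros(A.shape[1])
    while lam > lam_target:
        lam = max(eta * lam, lam_target)
        x = apg_restart(A, b, lam, x, L)  # warm start from the previous stage
    return x
```

Warm-starting each stage from the previous solution keeps the iterates sparse along the continuation path, which is precisely the regime where restricted strong convexity applies and the global linear convergence rate described in the abstract can take effect.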
