Convergence guarantees for a class of non-convex and non-smooth optimization problems

2020-03-20

Abstract

Non-convex optimization problems arise frequently in machine learning, including feature selection, structured matrix learning, mixture modeling, and neural network training. We consider the problem of finding critical points of a broad class of non-convex problems with non-smooth components. We analyze the behavior of two gradient-based methods, namely a sub-gradient method and a proximal method. Our main results establish rates of convergence for general problems, and also exhibit faster rates for subanalytic functions. As an application of our theory, we obtain a simplification of the popular CCCP algorithm, which retains all the desirable convergence properties of the original method along with a significantly lower cost per iteration. We illustrate our methods and theory via application to the problems of best subset selection, robust estimation, and shape-from-shading reconstruction.
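The proximal method analyzed in the abstract follows the standard proximal-gradient template: take a gradient step on the smooth part, then apply the proximal operator of the non-smooth part. The sketch below is a minimal illustration of that template on a familiar L1-regularized least-squares objective, not the paper's specific non-convex setting; the function names, the synthetic data, and the choice of step size 1/L are assumptions made for the example.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||x||_1 (assumed non-smooth component here).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient(grad_f, prox_g, x0, step, n_iters=1000):
    # Generic proximal-gradient iteration: x <- prox_g(x - step * grad_f(x), step).
    x = x0
    for _ in range(n_iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Illustrative objective: 0.5 * ||A x - b||^2 + lam * ||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = np.array([1.0, -2.0] + [0.0] * 8)
b = A @ x_true + 0.01 * rng.standard_normal(20)
lam = 0.5
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of grad_f

x_hat = proximal_gradient(
    grad_f=lambda x: A.T @ (A @ x - b),
    prox_g=lambda x, t: soft_threshold(x, lam * t),
    x0=np.zeros(10),
    step=step,
)
```

For a non-convex non-smooth component, the same loop applies with `prox_g` replaced by the (possibly set-valued) proximal map of that component; the paper's contribution concerns the convergence rates of such iterations to critical points.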
