Convergence of Cubic Regularization for Nonconvex Optimization under KŁ Property

2020-02-13

Abstract 

Cubic-regularized Newton’s method (CR) is a popular algorithm that is guaranteed to produce a second-order stationary solution for nonconvex optimization problems. However, the existing understanding of the convergence rate of CR is conditioned on special geometrical properties of the objective function. In this paper, we explore the asymptotic convergence rate of CR by exploiting the ubiquitous Kurdyka-Łojasiewicz (KŁ) property of nonconvex objective functions. Specifically, we characterize the asymptotic convergence rate of various optimality measures for CR, including the function value gap, the variable distance gap, the gradient norm, and the least eigenvalue of the Hessian matrix. Our results fully characterize the diverse convergence behaviors of these optimality measures over the full parameter regime of the KŁ property. Moreover, we show that the obtained asymptotic convergence rates of CR are order-wise faster than those of first-order gradient descent algorithms under the KŁ property.
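At each iterate, CR minimizes a cubic-regularized second-order model of the objective: with gradient $g$, Hessian $H$, and regularization parameter $M$, the step solves $\min_s\, g^\top s + \tfrac{1}{2} s^\top H s + \tfrac{M}{6}\|s\|^3$. The sketch below illustrates this update on a toy nonconvex function; the inner gradient-descent subproblem solver, the test function, and all parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def cr_step(grad, hess, M, inner_iters=1000, lr=0.01):
    # Approximately solve the cubic subproblem
    #   min_s  grad^T s + 0.5 s^T hess s + (M/6) ||s||^3
    # by plain gradient descent (an illustrative solver choice;
    # practical CR implementations use more refined subproblem solvers).
    s = np.zeros_like(grad)
    for _ in range(inner_iters):
        # Gradient of the cubic model: grad + hess s + (M/2) ||s|| s
        g_sub = grad + hess @ s + 0.5 * M * np.linalg.norm(s) * s
        s -= lr * g_sub
    return s

def cubic_regularized_newton(f_grad, f_hess, x0, M=1.0, iters=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x + cr_step(f_grad(x), f_hess(x), M)
    return x

# Illustrative nonconvex test function f(x) = x1^4 - 2*x1^2 + x2^2,
# with a saddle point at the origin and minima at (+1, 0) and (-1, 0).
grad = lambda x: np.array([4 * x[0]**3 - 4 * x[0], 2 * x[1]])
hess = lambda x: np.array([[12 * x[0]**2 - 4, 0.0], [0.0, 2.0]])

x_star = cubic_regularized_newton(grad, hess, np.array([0.5, 0.5]))
# x_star lands near the second-order stationary point (1, 0)
```

Because the cubic model exploits negative curvature, CR moves away from the saddle at the origin and settles at a point where the gradient vanishes and the Hessian is positive semidefinite, i.e., a second-order stationary solution.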

