
Stochastic Cubic Regularization for Fast Nonconvex Optimization

2020-02-13

Abstract

This paper proposes a stochastic variant of a classic algorithm, the cubic-regularized Newton method [Nesterov and Polyak, 2006]. The proposed algorithm efficiently escapes saddle points and finds approximate local minima for general smooth, nonconvex functions in only Õ(ε^{-3.5}) stochastic gradient and stochastic Hessian-vector product evaluations; the latter can be computed as efficiently as stochastic gradients. This improves upon the Õ(ε^{-4}) rate of stochastic gradient descent. Our rate matches the best known result for finding local minima, without requiring any delicate acceleration or variance-reduction techniques.
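The abstract rests on two ingredients that are easy to sketch: Hessian-vector products computed by automatic differentiation (roughly as cheap as a gradient), and a cubic-regularized step obtained by running gradient descent on the cubic submodel. Below is a minimal illustrative sketch in JAX, not the paper's implementation: it is deterministic where the paper uses minibatch estimates of the gradient and Hessian-vector product, and the names (`hvp`, `cubic_step`), the hyperparameters (`rho`, `lr`, `inner_steps`), and the toy function are assumptions chosen for demonstration.

```python
import jax
import jax.numpy as jnp

def hvp(f, x, v):
    # Hessian-vector product H(x) @ v via forward-over-reverse autodiff;
    # this costs about as much as two gradient evaluations, matching the
    # abstract's point that Hessian-vector products are as cheap as gradients.
    return jax.jvp(jax.grad(f), (x,), (v,))[1]

def cubic_step(f, x, rho=10.0, lr=0.05, inner_steps=200):
    # One cubic-regularized Newton step (deterministic here, for clarity;
    # the stochastic algorithm replaces g and hvp with minibatch estimates).
    # Approximately minimize the cubic submodel
    #   m(d) = g^T d + 0.5 * d^T H d + (rho / 6) * ||d||^3
    # by gradient descent, using only gradients and Hessian-vector products.
    g = jax.grad(f)(x)
    d = jnp.zeros_like(x)
    for _ in range(inner_steps):
        # grad m(d) = g + H d + (rho / 2) * ||d|| * d
        grad_m = g + hvp(f, x, d) + 0.5 * rho * jnp.linalg.norm(d) * d
        d = d - lr * grad_m
    return x + d

# Toy usage: x^2 - y^2 + y^4 has a saddle point at the origin.
f = lambda z: z[0] ** 2 - z[1] ** 2 + z[1] ** 4
x = jnp.array([0.1, 0.01])
for _ in range(10):
    x = cubic_step(f, x)
print(x)  # iterates move away from the saddle toward a local minimum
```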

