
Variance Reduction for Faster Non-Convex Optimization


Abstract

We consider the fundamental problem in non-convex optimization of efficiently reaching a stationary point. In contrast to the convex case, in the long history of this basic problem, the only known theoretical results on first-order non-convex optimization remain to be full gradient descent that converges in O(1/ε) iterations for smooth objectives, and stochastic gradient descent that converges in O(1/ε²) iterations for objectives that are sums of smooth functions. We provide the first improvement in this line of research. Our result is based on the variance reduction trick recently introduced to convex optimization, as well as a brand new analysis of variance reduction that is suitable for non-convex optimization. For objectives that are sums of n smooth functions, our first-order minibatch stochastic method converges with an O(1/ε) rate, and is faster than full gradient descent by a factor of Ω(n^{1/3}). We demonstrate the effectiveness of our methods on empirical risk minimization with non-convex loss functions and on training neural nets.
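To make the "variance reduction trick" mentioned above concrete, the following is a minimal sketch of an SVRG-style variance-reduced update for a finite-sum objective f(x) = (1/n) Σ_i f_i(x). The function name `svrg`, the step size, and the epoch/inner-loop structure are illustrative assumptions for this sketch, not the paper's exact minibatch algorithm or its theoretical parameter choices.

```python
import numpy as np

def svrg(grad_i, x0, n, step=0.01, epochs=20, inner=None, rng=None):
    """Illustrative SVRG-style loop for f(x) = (1/n) * sum_i f_i(x).

    grad_i(i, x) should return the gradient of the i-th component at x.
    This is a sketch of the variance-reduction idea, not the paper's method.
    """
    rng = np.random.default_rng() if rng is None else rng
    inner = n if inner is None else inner
    x = np.array(x0, dtype=float)
    for _ in range(epochs):
        snapshot = x.copy()
        # Full gradient at the snapshot point, recomputed once per epoch.
        full_grad = sum(grad_i(i, snapshot) for i in range(n)) / n
        for _ in range(inner):
            i = rng.integers(n)
            # Variance-reduced estimator: unbiased for the full gradient,
            # with variance that shrinks as x approaches the snapshot.
            g = grad_i(i, x) - grad_i(i, snapshot) + full_grad
            x -= step * g
    return x
```

Each epoch pays for one full gradient at the snapshot and then takes cheap stochastic steps whose variance decays as the iterate stays near the snapshot, which is the mechanism behind the improved rate over plain stochastic gradient descent.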

