Stochastic Variance Reduction for Nonconvex Optimization


Abstract

We study nonconvex finite-sum problems and analyze stochastic variance reduced gradient (SVRG) methods for them. SVRG and related methods have recently surged into prominence for convex optimization given their edge over stochastic gradient descent (SGD); but their theoretical analysis almost exclusively assumes convexity. In contrast, we obtain non-asymptotic rates of convergence of SVRG for nonconvex optimization, showing that it is provably faster than SGD and gradient descent. We also analyze a subclass of nonconvex problems on which SVRG attains linear convergence to the global optimum. We extend our analysis to mini-batch variants, showing (theoretical) linear speedup due to mini-batching in parallel settings.
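The abstract refers to the standard SVRG scheme for finite-sum objectives f(x) = (1/n) Σ_i f_i(x): at each outer epoch a full gradient is computed at a snapshot point, and inner iterations use the variance-reduced direction ∇f_i(x) − ∇f_i(x̃) + ∇f(x̃). The sketch below is only an illustration of that update on a toy nonconvex finite sum; the step size `eta`, epoch length `m`, epoch count, and the toy objective are illustrative assumptions, not the constants or problem class analyzed in the paper.

```python
import numpy as np

def svrg(grad_i, x0, n, *, eta=0.05, n_epochs=20, m=None, rng=None):
    """Minimal SVRG sketch for f(x) = (1/n) sum_i f_i(x).

    grad_i(x, i) returns the gradient of the i-th component at x.
    eta (step size) and m (inner-loop length) are illustrative choices,
    not the values prescribed by the paper's analysis.
    """
    rng = np.random.default_rng(rng)
    m = m or n                          # epoch length; commonly of order n
    x = np.asarray(x0, dtype=float)
    for _ in range(n_epochs):
        x_snap = x.copy()
        # Full gradient at the snapshot (the variance-reduction anchor).
        full_grad = np.mean([grad_i(x_snap, i) for i in range(n)], axis=0)
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced stochastic gradient.
            v = grad_i(x, i) - grad_i(x_snap, i) + full_grad
            x = x - eta * v
    return x

if __name__ == "__main__":
    # Toy nonconvex finite sum: f_i(x) = 1 - exp(-(x - a_i)^2 / 2).
    data_rng = np.random.default_rng(0)
    a = data_rng.normal(size=50)

    def grad_i(x, i):
        d = x - a[i]
        return d * np.exp(-0.5 * d ** 2)

    x_out = svrg(grad_i, x0=np.array([3.0]), n=len(a), eta=0.1, n_epochs=30)
    print("approximate stationary point:", x_out)
```

Because each inner step only touches one component gradient after the per-epoch full-gradient pass, the same loop extends naturally to the mini-batch variant mentioned in the abstract by averaging the variance-reduced direction over a sampled batch of indices.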
