SPIDER: Near-Optimal Non-Convex Optimization via Stochastic Path Integrated Differential Estimator

2020-02-14

Abstract 

In this paper, we propose a new technique named Stochastic Path-Integrated Differential EstimatoR (SPIDER), which can be used to track many deterministic quantities of interest with significantly reduced computational cost. Combining SPIDER with the method of normalized gradient descent, we propose SPIDER-SFO, which solves non-convex stochastic optimization problems using stochastic gradients only. We provide several error-bound results on its convergence rates. Specifically, we prove that the SPIDER-SFO algorithm achieves a gradient computation cost of O(min(n^{1/2} ε^{-2}, ε^{-3})) to find an ε-approximate first-order stationary point. In addition, we prove that SPIDER-SFO nearly matches the algorithmic lower bound for finding a stationary point under the gradient Lipschitz assumption in the finite-sum setting. Our SPIDER technique can be further applied to find an (ε, O(ε^{0.5}))-approximate second-order stationary point at a gradient computation cost of Õ(min(n^{1/2} ε^{-2} + ε^{-2.5}, ε^{-3})).
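The core idea described above — a recursive, path-integrated gradient estimator combined with a normalized gradient step — can be illustrated with a minimal sketch. The toy least-squares objective, the epoch length `q`, batch size, and step size below are illustrative assumptions, not the paper's exact theoretical settings:

```python
import numpy as np

# Toy finite-sum objective: f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2.
rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.normal(size=(n, d))
b = 5.0 * rng.normal(size=n)

def grad(x, idx):
    """Mini-batch gradient of f over the component indices idx."""
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / len(idx)

def spider_sfo(x, iters=500, q=20, batch=10, eta=0.05, eps=1e-8):
    """Sketch of SPIDER-SFO: path-integrated estimator + normalized steps."""
    v = grad(x, np.arange(n))  # full gradient at the start of an epoch
    for k in range(iters):
        # Normalized gradient descent step along the estimated direction.
        x_new = x - eta * v / (np.linalg.norm(v) + eps)
        if (k + 1) % q == 0:
            # Periodic full-gradient refresh to control accumulated error.
            v = grad(x_new, np.arange(n))
        else:
            # Path-integrated differential estimator:
            # v_k = grad_S(x_k) - grad_S(x_{k-1}) + v_{k-1}
            S = rng.choice(n, size=batch, replace=False)
            v = grad(x_new, S) - grad(x, S) + v
        x = x_new
    return x

x_star = spider_sfo(np.zeros(d))
print(np.linalg.norm(grad(x_star, np.arange(n))))  # gradient norm shrinks
```

The differential update reuses the previous estimate `v` and only pays two mini-batch gradient evaluations per step, which is where the reduced gradient computation cost claimed in the abstract comes from.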
