
Global Convergence of Langevin Dynamics Based Algorithms for Nonconvex Optimization

2020-02-17

Abstract 

We present a unified framework to analyze the global convergence of Langevin dynamics based algorithms for nonconvex finite-sum optimization with $n$ component functions. At the core of our analysis is a direct analysis of the ergodicity of the numerical approximations to Langevin dynamics, which leads to faster convergence rates. Specifically, we show that gradient Langevin dynamics (GLD) and stochastic gradient Langevin dynamics (SGLD) converge to the almost minimizer within $\tilde{O}\big(nd/(\lambda\epsilon)\big)$ and $\tilde{O}\big(d^7/(\lambda^5\epsilon^5)\big)$ stochastic gradient evaluations respectively, where $d$ is the problem dimension and $\lambda$ is the spectral gap of the Markov chain generated by GLD. Both results improve upon the best known gradient complexity results [45]. Furthermore, for the first time we prove the global convergence guarantee for variance reduced stochastic gradient Langevin dynamics (SVRG-LD) to the almost minimizer within $\tilde{O}\big(\sqrt{n}\,d^5/(\lambda^4\epsilon^{5/2})\big)$ stochastic gradient evaluations, which outperforms the gradient complexities of GLD and SGLD in a wide regime. Our theoretical analyses shed some light on using Langevin dynamics based algorithms for nonconvex optimization with provable guarantees.
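For readers unfamiliar with these iterations, below is a minimal Python sketch of the update rules the abstract refers to. It is not code from the paper; the component-gradient interface `grad_fi`, the step size `eta`, the inverse temperature `beta`, and the batch sizes are illustrative assumptions. GLD adds isotropic Gaussian noise of variance $2\eta/\beta$ to a full-gradient step, SGLD replaces the full gradient with a minibatch estimate over the $n$ components, and SVRG-LD reduces the variance of that estimate with a snapshot-based correction.

```python
import numpy as np

def gld_step(x, grad_f, eta, beta, rng):
    """One gradient Langevin dynamics (GLD) update:
    x_{k+1} = x_k - eta * grad f(x_k) + sqrt(2*eta/beta) * xi,
    where xi ~ N(0, I_d) and beta is the inverse temperature."""
    noise = rng.standard_normal(x.shape)
    return x - eta * grad_f(x) + np.sqrt(2.0 * eta / beta) * noise

def sgld_step(x, grad_fi, n, batch_size, eta, beta, rng):
    """One stochastic gradient Langevin dynamics (SGLD) update:
    the full gradient is replaced by the minibatch estimate
    (1/|B|) * sum_{i in B} grad f_i(x) over the n components."""
    batch = rng.choice(n, size=batch_size, replace=False)
    g = np.mean([grad_fi(x, i) for i in batch], axis=0)
    noise = rng.standard_normal(x.shape)
    return x - eta * g + np.sqrt(2.0 * eta / beta) * noise

def svrg_ld_gradient(x, snap_x, snap_grad, grad_fi, n, batch_size, rng):
    """Semi-stochastic gradient used by SVRG-LD: a minibatch gradient
    corrected by the full gradient snap_grad computed at a snapshot
    point snap_x, which lowers the variance of the estimate."""
    batch = rng.choice(n, size=batch_size, replace=False)
    diff = np.mean([grad_fi(x, i) - grad_fi(snap_x, i) for i in batch], axis=0)
    return diff + snap_grad

if __name__ == "__main__":
    # Illustrative run on a toy nonconvex finite sum f(x) = sum_i sin(a_i^T x).
    rng = np.random.default_rng(0)
    n, d = 100, 2
    A = rng.standard_normal((n, d))
    grad_fi = lambda x, i: np.cos(A[i] @ x) * A[i]
    x = rng.standard_normal(d)
    for _ in range(1000):
        x = sgld_step(x, grad_fi, n, batch_size=10, eta=0.01, beta=100.0, rng=rng)
```

In this sketch the noise scale $\sqrt{2\eta/\beta}$ is what distinguishes Langevin dynamics from plain (stochastic) gradient descent: the injected Gaussian noise lets the iterates escape shallow local minima, which is the mechanism behind the global convergence guarantees discussed above.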

