Approximation Analysis of Stochastic Gradient Langevin Dynamics by using Fokker-Planck Equation and Itô Process


2020-03-04

Abstract

The stochastic gradient Langevin dynamics (SGLD) algorithm is appealing for large-scale Bayesian learning. The SGLD algorithm seamlessly transitions between stochastic optimization and Bayesian posterior sampling. However, a solid theory, such as a convergence proof, has not been developed. We theoretically analyze the SGLD algorithm with constant stepsize in two ways. First, we show by using the Fokker-Planck equation that the probability distribution of random variables generated by the SGLD algorithm converges to the Bayesian posterior. Second, we analyze the convergence of the SGLD algorithm by using the Itô process, which reveals that the SGLD algorithm converges weakly but not strongly. This result indicates that the SGLD algorithm can be an approximation method for posterior averaging.
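The SGLD update discussed in the abstract can be illustrated with a minimal sketch: each iterate takes a half-stepsize gradient step on the log-posterior and adds Gaussian noise whose variance equals the stepsize. The toy target below (a 1-D standard normal, so the score is simply `-theta`) and the function names are illustrative assumptions, not the paper's setup; in practice the gradient would be a minibatch estimate.

```python
import numpy as np

def sgld_step(theta, grad_log_post, stepsize, rng):
    """One SGLD update with constant stepsize: half-stepsize gradient
    ascent on the log-posterior plus N(0, stepsize) Gaussian noise."""
    noise = rng.normal(0.0, np.sqrt(stepsize), size=theta.shape)
    return theta + 0.5 * stepsize * grad_log_post(theta) + noise

# Toy target (assumed for illustration): standard normal posterior,
# whose score function is grad log p(theta) = -theta.
grad = lambda th: -th

rng = np.random.default_rng(0)
theta = np.array([5.0])  # deliberately far from the posterior mode
samples = []
for t in range(20000):
    theta = sgld_step(theta, grad, stepsize=0.01, rng=rng)
    if t > 5000:  # discard burn-in before averaging
        samples.append(theta[0])

# With a constant stepsize, the empirical mean and standard deviation of
# the iterates approximate the posterior's (0 and 1) up to a small bias.
post_mean = float(np.mean(samples))
post_std = float(np.std(samples))
```

This mirrors the abstract's point: with a constant stepsize the iterates do not converge to a single point (strong convergence) but their distribution approximates the posterior, so averages over the chain estimate posterior expectations.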
