Rapid Convergence of the Unadjusted Langevin Algorithm: Isoperimetry Suffices

2020-02-21

Abstract

We study the Unadjusted Langevin Algorithm (ULA) for sampling from a probability distribution ν = e^{-f} on ℝⁿ. We prove a convergence guarantee in Kullback-Leibler (KL) divergence assuming ν satisfies a log-Sobolev inequality and the Hessian of f is bounded. Notably, we do not assume convexity or bounds on higher derivatives. We also prove convergence guarantees in Rényi divergence of order q > 1 assuming the limit of ULA satisfies either the log-Sobolev inequality or the Poincaré inequality.
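The ULA iteration itself is a gradient step on f plus injected Gaussian noise: x_{k+1} = x_k − η∇f(x_k) + √(2η) ξ_k with ξ_k ~ N(0, I). A minimal sketch, where the quadratic choice of f (a standard Gaussian target, which satisfies a log-Sobolev inequality) and all step-size/iteration values are illustrative and not taken from the paper:

```python
import numpy as np

def ula(grad_f, x0, step, n_iters, rng):
    """Unadjusted Langevin Algorithm:
    x_{k+1} = x_k - step * grad_f(x_k) + sqrt(2 * step) * N(0, I)."""
    x = np.array(x0, dtype=float)
    for _ in range(n_iters):
        noise = rng.standard_normal(x.shape)
        x = x - step * grad_f(x) + np.sqrt(2 * step) * noise
    return x

# Illustrative target: nu = e^{-f} with f(x) = ||x||^2 / 2,
# i.e. a standard Gaussian on R^2 (strongly log-concave).
rng = np.random.default_rng(0)
samples = np.array(
    [ula(lambda x: x, np.zeros(2), step=0.01, n_iters=500, rng=rng)
     for _ in range(2000)]
)
# The empirical mean should be near 0 and the empirical std near 1.
```

Note that ULA's stationary distribution is biased away from ν for any fixed step size (here the limiting variance is 1/(1 − η/2) rather than 1); the paper's Rényi-divergence guarantees are stated with respect to this biased limit.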

