Variational Inference with Tail-adaptive f-Divergence


Abstract

Variational inference with α-divergences has been widely used in modern probabilistic machine learning. Compared to the Kullback-Leibler (KL) divergence, a major advantage of using α-divergences (with positive α values) is their mass-covering property. However, estimating and optimizing α-divergences requires importance sampling, which may have large or infinite variance due to the heavy tails of the importance weights. In this paper, we propose a new class of tail-adaptive f-divergences that adaptively change the convex function f with the tail distribution of the importance weights, in a way that theoretically guarantees finite moments while simultaneously achieving the mass-covering property. We test our method on Bayesian neural networks, and apply it to improve a recent soft actor-critic (SAC) algorithm (Haarnoja et al., 2018) in deep reinforcement learning. Our results show that our approach yields significant advantages over existing methods based on classical KL and α-divergences.
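
To make the tail-adaptive idea concrete, here is a minimal NumPy sketch of one way to realize it: instead of weighting samples by raw importance weights w_i = p(x_i)/q(x_i), whose heavy tails can give infinite variance, each sample is weighted by the empirical tail probability of its weight (the fraction of samples whose weight is at least w_i), which is bounded in (0, 1] and therefore has finite moments. The helper name `tail_adaptive_weights` and the Gaussian choices of p and q are illustrative assumptions, not the authors' exact estimator.

```python
import numpy as np

def tail_adaptive_weights(log_p, log_q):
    """Rank-based surrogate for tail-adaptive weighting (a sketch, not the
    paper's exact algorithm): each sample's weight is the empirical tail
    probability of its importance weight, which is bounded in (0, 1]."""
    log_w = log_p - log_q                     # log importance weights
    w = np.exp(log_w - log_w.max())           # stabilized; ranks are unchanged
    # Empirical tail probability: fraction of samples with weight >= w_i.
    tail = (w[None, :] >= w[:, None]).mean(axis=1)
    return tail / tail.sum()                  # normalize to sum to one

# Illustrative usage with hypothetical Gaussian target p = N(1, 2^2)
# and proposal q = N(0, 1).
rng = np.random.default_rng(0)
x = rng.normal(size=1000)                     # draws from q
log_q = -0.5 * x**2 - 0.5 * np.log(2 * np.pi)
log_p = -0.5 * ((x - 1.0) / 2.0)**2 - np.log(2.0) - 0.5 * np.log(2 * np.pi)
lam = tail_adaptive_weights(log_p, log_q)
print(lam.max(), lam.sum())                   # bounded weights, summing to 1
```

In a variational-inference loop, these bounded weights would stand in for the raw importance weights when averaging per-sample gradients, retaining the mass-covering behavior of α-divergences while keeping the estimator's variance finite.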
