
Stein Points

2020-03-20

Abstract

An important task in computational statistics and machine learning is to approximate a posterior distribution p(x) with an empirical measure supported on a set of representative points {x_i}_{i=1}^n. This paper focuses on methods where the selection of points is essentially deterministic, with emphasis on achieving accurate approximation when n is small. To this end, we present Stein Points. The idea is to exploit either a greedy or a conditional gradient method to iteratively minimise a kernel Stein discrepancy between the empirical measure and p(x). Our empirical results demonstrate that Stein Points enable accurate approximation of the posterior at modest computational cost. In addition, theoretical results are provided to establish convergence of the method.
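To make the greedy variant concrete, the sketch below illustrates one way such a scheme could look: points are selected one at a time from a fixed candidate grid so as to minimise the squared kernel Stein discrepancy of the growing point set, using a Langevin Stein kernel built from an inverse multiquadric base kernel and a 2-D Gaussian target. All function names, the candidate grid, and the kernel parameters are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

# Minimal sketch of greedy Stein Points for a 2-D Gaussian target (illustrative,
# not the authors' code). Candidates come from a coarse grid; a global optimiser
# or random search could be substituted.

def score(x, mean, cov_inv):
    """Score function grad_x log p(x) for a Gaussian N(mean, cov)."""
    return -cov_inv @ (x - mean)

def imq_stein_kernel(x, y, sx, sy, c2=1.0, beta=-0.5):
    """Langevin Stein kernel built from the IMQ base kernel (c2 + ||x-y||^2)^beta."""
    r = x - y
    s = c2 + r @ r
    k = s ** beta
    gx = 2.0 * beta * s ** (beta - 1) * r            # grad_x k
    gy = -gx                                         # grad_y k
    div = (-4.0 * beta * (beta - 1) * s ** (beta - 2) * (r @ r)
           - 2.0 * beta * len(r) * s ** (beta - 1))  # trace of grad_x grad_y k
    return div + gx @ sy + gy @ sx + k * (sx @ sy)

def greedy_stein_points(n, candidates, mean, cov_inv):
    """Select n points by greedily minimising the kernel Stein discrepancy."""
    scores = np.array([score(c, mean, cov_inv) for c in candidates])
    points, pt_scores = [], []
    for _ in range(n):
        best_val, best_idx = np.inf, None
        for idx, (c, sc) in enumerate(zip(candidates, scores)):
            # Increment to n^2 * KSD^2 if candidate c is added to the current set.
            val = imq_stein_kernel(c, c, sc, sc)
            val += 2.0 * sum(imq_stein_kernel(x, c, sx, sc)
                             for x, sx in zip(points, pt_scores))
            if val < best_val:
                best_val, best_idx = val, idx
        points.append(candidates[best_idx])
        pt_scores.append(scores[best_idx])
    return np.array(points)

if __name__ == "__main__":
    mean, cov_inv = np.zeros(2), np.eye(2)
    g = np.linspace(-3, 3, 25)
    candidates = np.array([[a, b] for a in g for b in g])
    print(greedy_stein_points(10, candidates, mean, cov_inv))
```

Note that only the score function grad log p(x) is needed, not the normalising constant of p(x), which is what makes the kernel Stein discrepancy usable for posterior approximation.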


