
Bucket Renormalization for Approximate Inference

2020-03-20

Abstract

Probabilistic graphical models are a key tool in machine learning applications. Computing the partition function, i.e., the normalizing constant, is a fundamental task of statistical inference, but it is generally computationally intractable, leading to extensive study of approximation methods. Iterative variational methods are a popular and successful family of approaches. However, even state-of-the-art variational methods can return poor results or fail to converge on difficult instances. In this paper, we instead consider computing the partition function via sequential summation over variables. We develop robust approximate algorithms by combining ideas from mini-bucket elimination with tensor network and renormalization group methods from statistical physics. The resulting "convergence-free" methods show good empirical performance on both synthetic and real-world benchmark models, even for difficult instances.
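To make "sequential summation over variables" concrete, the following is a minimal sketch (a toy example constructed for illustration, not the paper's approximate algorithm): on a small chain MRF, the partition function Z obtained by exact variable elimination matches the brute-force sum over all joint states. The factor names (`f12`, `f23`) and the chain structure are assumptions for this example.

```python
import itertools
import numpy as np

# Toy chain MRF x1 - x2 - x3 with binary variables and random pairwise factors.
rng = np.random.default_rng(0)
f12 = rng.random((2, 2))  # factor over (x1, x2)
f23 = rng.random((2, 2))  # factor over (x2, x3)

# Brute force: sum the product of factors over all 2^3 joint configurations.
Z_brute = sum(f12[x1, x2] * f23[x2, x3]
              for x1, x2, x3 in itertools.product(range(2), repeat=3))

# Sequential summation (variable elimination): sum out one variable at a time,
# passing intermediate "messages" over the remaining variables.
m1 = f12.sum(axis=0)             # eliminate x1 -> message over x2
m3 = f23.sum(axis=1)             # eliminate x3 -> message over x2
Z_elim = float((m1 * m3).sum())  # eliminate x2

assert np.isclose(Z_brute, Z_elim)
```

Exact elimination like this is exponential in the treewidth; mini-bucket elimination (which the paper builds on) keeps the cost bounded by splitting buckets that grow too large, at the price of approximation.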

