
Message passing with l1 penalized KL minimization


Abstract

Bayesian inference is often hampered by large computational expense. As a generalization of belief propagation (BP), expectation propagation (EP) approximates exact Bayesian computation with efficient message passing updates. However, when the approximating family used by EP is far from the exact posterior distribution, message passing may yield poor approximation quality and suffer from divergence. To address this issue, we propose an approximate inference method, relaxed expectation propagation (REP), based on a new divergence with an l1 penalty. Minimizing this penalized divergence adaptively relaxes EP's moment matching requirement for message passing. We apply REP to Gaussian process classification, and experimental results demonstrate significant improvement of REP over EP and α-divergence based power EP in terms of algorithmic stability, estimation accuracy and predictive performance. Furthermore, we develop relaxed belief propagation (RBP), a special case of REP, to conduct inference on discrete Markov random fields (MRFs). Our results show improved estimation accuracy of RBP over BP and fractional BP when interactions between MRF nodes are strong.
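In standard EP, each message update projects the tilted distribution \(\hat p\) onto an exponential family \(q_\eta\) by minimizing \(\mathrm{KL}(\hat p \,\|\, q_\eta)\), which enforces exact moment matching, \(\mathbb{E}_{q}[\phi(x)] = \mathbb{E}_{\hat p}[\phi(x)]\). The sketch below shows one way an l1 penalty can relax this projection; the placement of the penalty on the natural-parameter change and the symbols \(\eta\), \(\rho\), \(\phi\) are our illustrative assumptions, not necessarily the paper's exact definitions.

% Illustrative l1-penalized KL projection (assumed form, not the paper's).
\[
  q^{\mathrm{new}} \;=\; \arg\min_{\eta}\;
  \mathrm{KL}\bigl(\hat p \,\big\|\, q_\eta\bigr)
  \;+\; \rho \,\bigl\lVert \eta - \eta^{\mathrm{old}} \bigr\rVert_1 .
\]
% For exponential-family q_eta, the gradient of the KL term is
% \nabla_\eta \mathrm{KL}(\hat p \| q_\eta)
%   = \mathbb{E}_{q_\eta}[\phi(x)] - \mathbb{E}_{\hat p}[\phi(x)],
% so the subgradient optimality condition relaxes moment matching to
\[
  \bigl|\, \mathbb{E}_{q_\eta}[\phi_i(x)] - \mathbb{E}_{\hat p}[\phi_i(x)] \,\bigr|
  \;\le\; \rho
  \qquad \text{for every coordinate } i,
\]
% with \eta_i = \eta_i^{\mathrm{old}} whenever the inequality is strict.

Under this reading, moments need only match up to a tolerance \(\rho\): small mismatches leave the corresponding message parameters untouched, which damps the unstable updates the abstract describes, and \(\rho \to 0\) recovers standard EP moment matching.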

