Estimating the Bayes Point Using Linear Knapsack Problems

2020-02-27

Abstract

A Bayes Point machine is a binary classifier that approximates the Bayes-optimal classifier by estimating the mean of the posterior distribution of classifier parameters. Past Bayes Point machines have overcome the intractability of this goal by using message passing techniques that approximate the posterior of the classifier parameters as a Gaussian distribution. In this paper, we investigate alternative message passing approaches that do not rely on Gaussian approximation. To make this possible, we introduce a new computational shortcut based on linear multiple-choice knapsack problems that reduces the complexity of approximating Bayes Point belief propagation messages from exponential to linear in the number of data features. Empirical tests of our approach show significant improvement in linear classification over both soft-margin SVMs and Expectation Propagation Bayes Point machines on several real-world UCI datasets.
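
The computational shortcut named above rests on linear (LP-relaxed) multiple-choice knapsack problems. The paper's message-passing details are not reproduced here; as a rough sketch of that building block alone, the snippet below solves the LP relaxation of a multiple-choice knapsack instance with the standard dominance-pruning, greedy-by-efficiency method (see, e.g., Kellerer, Pferschy and Pisinger, Knapsack Problems, ch. 11). The function name lp_mckp and the toy instance are hypothetical and not taken from the paper.

```python
"""Illustrative sketch (not the paper's code): LP relaxation of a
multiple-choice knapsack problem via dominance pruning and a greedy
pass over incremental efficiencies.  One item is picked per class
(fractionally split between at most two items of a single class) to
maximize profit under a weight budget."""
from typing import List, Tuple


def lp_mckp(classes: List[List[Tuple[float, float]]], capacity: float):
    """classes[k] is a list of (weight, profit) items; pick one per class.

    Returns (LP-optimal profit, picks), where picks[k] is a list of
    (item index, fraction) pairs for class k.
    """
    # 1. Per class, keep only items on the upper convex hull of (weight, profit).
    hulls = []
    for items in classes:
        order = sorted(range(len(items)), key=lambda i: (items[i][0], -items[i][1]))
        hull = []
        for i in order:
            w, p = items[i]
            # Drop dominated items: heavier but not more profitable.
            if hull and p <= items[hull[-1]][1]:
                continue
            # Drop LP-dominated items that fall below the upper hull.
            while len(hull) >= 2:
                w1, p1 = items[hull[-2]]
                w2, p2 = items[hull[-1]]
                if (p2 - p1) * (w - w2) <= (p - p2) * (w2 - w1):
                    hull.pop()
                else:
                    break
            hull.append(i)
        hulls.append(hull)

    # 2. Start from the lightest hull item of every class.
    profit = sum(classes[k][h[0]][1] for k, h in enumerate(hulls))
    weight = sum(classes[k][h[0]][0] for k, h in enumerate(hulls))
    picks = [[(h[0], 1.0)] for h in hulls]
    if weight > capacity:
        raise ValueError("even the lightest choices exceed the capacity")

    # 3. Take incremental "upgrades" in order of decreasing efficiency;
    #    the first one that does not fit is taken fractionally.
    increments = []
    for k, h in enumerate(hulls):
        for a, b in zip(h, h[1:]):
            dw = classes[k][b][0] - classes[k][a][0]
            dp = classes[k][b][1] - classes[k][a][1]
            increments.append((dp / dw, dw, dp, k, a, b))
    increments.sort(reverse=True)

    for eff, dw, dp, k, a, b in increments:
        if weight + dw <= capacity:
            weight += dw
            profit += dp
            picks[k] = [(b, 1.0)]
        else:
            frac = (capacity - weight) / dw
            if frac > 0:
                profit += frac * dp
                picks[k] = [(a, 1.0 - frac), (b, frac)]
            break
    return profit, picks


if __name__ == "__main__":
    # Toy instance: three classes of (weight, profit) items, capacity 10.
    toy = [
        [(2, 3), (4, 6), (7, 8)],
        [(1, 1), (3, 5)],
        [(2, 2), (5, 7), (6, 7.5)],
    ]
    best, chosen = lp_mckp(toy, capacity=10.0)
    print("LP optimum:", best)
    print("fractional picks per class:", chosen)
```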
