
Augment and Reduce: Stochastic Inference for Large Categorical Distributions

2020-03-11

Abstract

Categorical distributions are ubiquitous in machine learning, e.g., in classification, language models, and recommendation systems. However, when the number of possible outcomes is very large, using categorical distributions becomes computationally expensive, as the complexity scales linearly with the number of outcomes. To address this problem, we propose augment and reduce (A&R), a method to alleviate the computational complexity. A&R uses two ideas: latent variable augmentation and stochastic variational inference. It maximizes a lower bound on the marginal likelihood of the data. Unlike existing methods which are specific to softmax, A&R is more general and is amenable to other categorical models, such as multinomial probit. On several large-scale classification problems, we show that A&R provides a tighter bound on the marginal likelihood and has better predictive performance than existing approaches.
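The abstract describes the method only at a high level. As an illustration of the "reduce" idea, the Python sketch below estimates a lower bound on the softmax log-likelihood by subsampling negative classes and rescaling. This is a minimal sketch in the spirit of the one-vs-each-style bounds that A&R builds on, not the authors' full algorithm (which additionally performs stochastic variational inference over an augmented latent variable); the function name subsampled_softmax_bound, the sample size S, and all numerical values are illustrative assumptions.

```python
# Minimal sketch (not the authors' exact A&R algorithm): a subsampled,
# unbiased estimate of a lower bound on the softmax log-likelihood,
# in the spirit of "augment, then reduce by subsampling classes".
import numpy as np

def log_sigmoid(z):
    # Numerically stable log(sigmoid(z)) = -log(1 + exp(-z)).
    return -np.logaddexp(0.0, -z)

def subsampled_softmax_bound(psi, y, S, rng):
    """Stochastic estimate of a lower bound on log softmax(psi)[y].

    Uses the one-vs-each-style bound
        log p(y) >= sum_{k != y} log sigmoid(psi_y - psi_k),
    estimating the sum over the K-1 negative classes by subsampling
    S of them and rescaling by (K-1)/S.
    """
    K = psi.shape[0]
    negatives = rng.choice(np.delete(np.arange(K), y), size=S, replace=False)
    per_class = log_sigmoid(psi[y] - psi[negatives])
    return (K - 1) / S * per_class.sum()

rng = np.random.default_rng(0)
K = 10_000                       # large number of classes (assumed)
psi = rng.normal(size=K)         # logits for one data point (assumed)
y = 42                           # observed class (assumed)

exact = psi[y] - np.logaddexp.reduce(psi)   # exact log softmax probability
estimate = subsampled_softmax_bound(psi, y, S=50, rng=rng)
print(f"exact log-likelihood: {exact:.3f}, stochastic bound: {estimate:.3f}")
```

In a full model, psi would come from model parameters (e.g., per-class scores of an input), and the stochastic bound would be optimized with minibatches over both data points and classes, which is what makes the per-iteration cost independent of the total number of outcomes.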

Previous: Quasi-Monte Carlo Variational Inference

Next: On the Spectrum of Random Features Maps of High Dimensional Data

