
MADE: Masked Autoencoder for Distribution Estimation


Abstract

There has been a lot of recent interest in designing neural network models to estimate a distribution from a set of examples. We introduce a simple modification for autoencoder neural networks that yields powerful generative models. Our method masks the autoencoder's parameters to respect autoregressive constraints: each input is reconstructed only from previous inputs in a given ordering. Constrained this way, the autoencoder outputs can be interpreted as a set of conditional probabilities, and their product, the full joint probability. We can also train a single network that can decompose the joint probability in multiple different orderings. Our simple framework can be applied to multiple architectures, including deep ones. Vectorized implementations, such as on GPUs, are simple and fast. Experiments demonstrate that this approach is competitive with state-of-the-art tractable distribution estimators. At test time, the method is significantly faster and scales better than other autoregressive estimators.
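The abstract's central idea is that masking the weight matrices of an ordinary autoencoder forces each output unit to depend only on earlier inputs in a chosen ordering, so the outputs can be read as the conditionals whose product is the joint probability. Below is a minimal sketch of that mask construction for a single hidden layer under the natural ordering, written in NumPy for illustration; the function and variable names (made_masks, n_in, n_hidden) are assumptions and not taken from the paper.

import numpy as np

def made_masks(n_in, n_hidden, rng=np.random.default_rng(0)):
    """Sketch of MADE-style mask construction for one hidden layer.

    Each hidden unit k is assigned a degree m[k] in {1, ..., n_in - 1}.
    A hidden unit may only connect to inputs with index <= m[k], and
    output unit d may only connect to hidden units with m[k] < d, so the
    reconstruction of input d never depends on x_d or any later input.
    """
    # Degrees of the input/output units under the natural ordering 1..n_in.
    m_in = np.arange(1, n_in + 1)
    # Random degrees for the hidden units.
    m_hid = rng.integers(1, n_in, size=n_hidden)

    # Input-to-hidden mask: hidden unit k sees input d iff m_hid[k] >= d.
    mask_w = (m_hid[:, None] >= m_in[None, :]).astype(np.float32)  # (n_hidden, n_in)
    # Hidden-to-output mask: output d sees hidden unit k iff d > m_hid[k].
    mask_v = (m_in[:, None] > m_hid[None, :]).astype(np.float32)   # (n_in, n_hidden)
    return mask_w, mask_v

# The masks are applied elementwise to the weight matrices before each
# forward pass, e.g. h = sigmoid((W * mask_w) @ x + b), so the resulting
# outputs can be interpreted as the conditionals p(x_d | x_<d).
mask_w, mask_v = made_masks(n_in=5, n_hidden=8)

Training multiple orderings, as the abstract mentions, amounts to resampling the ordering and the hidden degrees and rebuilding these masks; the underlying weights are shared.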

