Composite Functional Gradient Learning of Generative Adversarial Models

2020-03-19

Abstract

This paper first presents a theory for generative adversarial methods that does not rely on the traditional minimax formulation. It shows that, given a sufficiently strong discriminator, a good generator can be learned such that the KL divergence between the distributions of real and generated data decreases after each functional gradient step, converging to zero. Based on this theory, we propose a new, stable generative adversarial method. We also provide a theoretical insight into the original GAN from this new viewpoint. Experiments on image generation demonstrate the effectiveness of the proposed method.
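The core idea described above — moving generated samples along a functional gradient derived from a discriminator so that the generated distribution approaches the real one — can be illustrated with a toy 1-D sketch. This is an illustrative assumption, not the paper's exact algorithm: a logistic discriminator D is fit to separate real from generated samples, and each generated sample is moved along ∇ₓ log(D(x)/(1−D(x))), which for a linear logit wx + b is simply w.

```python
import numpy as np

# Toy sketch (illustrative, not the paper's exact algorithm):
# one functional-gradient step on 1-D Gaussian samples.
rng = np.random.default_rng(0)
real = rng.normal(2.0, 1.0, 1000)   # target distribution N(2, 1)
fake = rng.normal(0.0, 1.0, 1000)   # generator output N(0, 1)

def fit_logistic(x, y, steps=500, lr=0.1):
    """Fit D(x) = sigmoid(w*x + b) by gradient descent on the log loss."""
    w = b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))
        g = p - y                      # dL/dlogit for the log loss
        w -= lr * np.mean(g * x)
        b -= lr * np.mean(g)
    return w, b

x = np.concatenate([real, fake])
y = np.concatenate([np.ones_like(real), np.zeros_like(fake)])
w, b = fit_logistic(x, y)

# Functional gradient step: x <- x + eta * grad_x log(D(x)/(1-D(x))) = x + eta * w
eta = 0.5
fake_next = fake + eta * w

# The updated samples should sit closer to the real mean than before.
print(abs(np.mean(fake_next) - 2.0) < abs(np.mean(fake) - 2.0))
```

One such step shrinks the gap between the generated and real distributions; iterating it (refitting the discriminator between steps) is the mechanism by which the KL divergence decreases toward zero in the theory above.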

