Which Training Methods for GANs do actually Converge?

Abstract

Recent work has shown local convergence of GAN training for absolutely continuous data and generator distributions. In this paper, we show that the requirement of absolute continuity is necessary: we describe a simple yet prototypical counterexample showing that in the more realistic case of distributions that are not absolutely continuous, unregularized GAN training is not always convergent. Furthermore, we discuss regularization strategies that were recently proposed to stabilize GAN training. Our analysis shows that GAN training with instance noise or zero-centered gradient penalties converges. On the other hand, we show that Wasserstein-GANs and WGAN-GP with a finite number of discriminator updates per generator update do not always converge to the equilibrium point. We discuss these results, leading us to a new explanation for the stability problems of GAN training. Based on our analysis, we extend our convergence results to more general GANs and prove local convergence for simplified gradient penalties even if the generator and data distributions lie on lower-dimensional manifolds. We find these penalties to work well in practice and use them to learn high-resolution generative image models for a variety of datasets with little hyperparameter tuning.
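To make the abstract's central practical recommendation concrete, below is a minimal sketch of a zero-centered gradient penalty of the kind the paper analyzes (a penalty on the discriminator's gradient norm at real data points, often called R1). It assumes PyTorch; the discriminator D, the real batch x_real, and the default weight gamma are illustrative placeholders, not code or values taken from the paper.

import torch

def r1_penalty(D, x_real, gamma=10.0):
    # Penalize the squared gradient norm of D at samples from the data
    # distribution. The penalty is zero-centered: it is minimized when the
    # discriminator's gradient vanishes on the data manifold.
    x_real = x_real.detach().requires_grad_(True)
    out = D(x_real)
    # Gradient of the discriminator output with respect to its input;
    # create_graph=True lets the penalty itself be backpropagated through.
    (grad,) = torch.autograd.grad(
        outputs=out.sum(), inputs=x_real, create_graph=True
    )
    # (gamma / 2) * E[ ||grad D(x)||^2 ], averaged over the batch.
    return 0.5 * gamma * grad.pow(2).flatten(start_dim=1).sum(dim=1).mean()

In use, this term would simply be added to the discriminator's loss on each real batch; because it is centered at zero rather than at one (as in WGAN-GP), it does not push the discriminator away from the zero-gradient equilibrium the paper's convergence analysis relies on.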

