Nonparametric density estimation & convergence of GANs under Besov IPM losses

2020-02-23

Abstract

We study the problem of estimating a nonparametric probability density under a large family of losses called Besov IPMs, which include, for example, Lp distances, total variation distance, and generalizations of both Wasserstein and Kolmogorov-Smirnov distances. For a wide variety of settings, we provide both lower and upper bounds, identifying precisely how the choice of loss function and assumptions on the data interact to determine the minimax optimal convergence rate. We also show that linear distribution estimates, such as the empirical distribution or kernel density estimator, often fail to converge at the optimal rate. Our bounds generalize, unify, or improve several recent and classical results. Moreover, IPMs can be used to formalize a statistical model of generative adversarial networks (GANs). Thus, we show how our results imply bounds on the statistical error of a GAN, showing, for example, that GANs can strictly outperform the best linear estimator.
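To make the IPM view concrete: an integral probability metric measures the distance between two distributions as the largest gap in expectations over a class of test functions, and the abstract's examples arise from particular classes (indicators give Kolmogorov-Smirnov, 1-Lipschitz functions give 1-Wasserstein). The sketch below is not from the paper; it computes these two special cases for one-dimensional samples using their standard CDF-based closed forms, with NumPy.

```python
import numpy as np

def ks_distance(x, y):
    """Kolmogorov-Smirnov distance: the IPM whose test functions are
    indicators f(s) = 1{s <= t}, i.e. sup_t |F_x(t) - F_y(t)|."""
    grid = np.sort(np.concatenate([x, y]))
    Fx = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    Fy = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return np.max(np.abs(Fx - Fy))

def wasserstein1_distance(x, y):
    """1-Wasserstein distance: the IPM over 1-Lipschitz test functions.
    In one dimension it equals the integral of |F_x - F_y|."""
    xs, ys = np.sort(x), np.sort(y)
    grid = np.sort(np.concatenate([xs, ys]))
    Fx = np.searchsorted(xs, grid, side="right") / len(x)
    Fy = np.searchsorted(ys, grid, side="right") / len(y)
    # Empirical CDFs are piecewise constant between grid points, so the
    # integral is a sum of |Fx - Fy| times the interval widths.
    return np.sum(np.abs(Fx - Fy)[:-1] * np.diff(grid))

# Two Gaussian samples shifted by 0.5 (illustrative data, not from the paper).
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 1000)
y = rng.normal(0.5, 1.0, 1000)
print("KS:", ks_distance(x, y))
print("W1:", wasserstein1_distance(x, y))
```

Both functions plug empirical distributions into the same IPM template; only the test-function class changes, which is exactly the knob the paper's Besov IPM family turns continuously.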

