Abstract
Generative Adversarial Nets (GANs) are very successful
at modeling distributions from given samples, even in the
high-dimensional case. However, their formulation is also
known to be hard to optimize and often unstable. While
this is particularly true for early GAN formulations, there
has been significant empirically motivated and theoretically
founded progress to improve stability, for instance, by using
the Wasserstein distance rather than the Jensen-Shannon
divergence. Here, we consider an alternative formulation for
generative modeling based on random projections, which, in
its simplest form, results in a single objective rather than a
saddle-point formulation. By augmenting this approach with
a discriminator, we improve its accuracy. We find our approach to be significantly more stable than even the
improved Wasserstein GAN. Further, unlike the traditional
GAN loss, the loss in our formulation is a good measure of the actual distance between the distributions, and, for
the first time in GAN training, we are able to provide estimates
of this distance.