Diverse Conditional Image Generation by
Stochastic Regression with
Latent Drop-Out Codes
Abstract. Recent advances in deep learning and probabilistic modeling have led to strong improvements in generative models for images. On the one hand, Generative Adversarial Networks (GANs) have contributed a highly effective adversarial learning procedure, but still suffer from stability issues. On the other hand, Conditional Variational Auto-Encoders (CVAEs) provide a sound way of conditional modeling but suffer from mode-mixing issues. Therefore, recent work has turned back to simple and stable regression models that are effective at generation but give up the sampling mechanism and the latent code representation. We propose a novel and efficient stochastic regression approach with latent drop-out codes that combines the merits of both lines of research. In addition, a new training objective increases coverage of the training distribution, leading to improvements over the state of the art in terms of both accuracy and diversity.
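To make the core idea concrete, the following is a minimal toy sketch (not the paper's implementation) of how a binary drop-out mask can serve as a latent code for a deterministic regressor: each sampled mask gates a different sub-network, so re-sampling the mask at test time produces diverse outputs for the same conditioning input. All weights, dimensions, and function names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy regressor: one hidden layer whose units are gated
# by a sampled binary "drop-out code" z. Each distinct z selects a
# different sub-network of the same trained regressor, so sampling z
# plays the role of a latent code at generation time.
W1 = rng.normal(size=(8, 16))   # hypothetical input-to-hidden weights
W2 = rng.normal(size=(16, 2))   # hypothetical hidden-to-output weights

def generate(x, z, keep_prob=0.5):
    """Regress an output from conditioning input x under latent code z.

    x: conditioning input, shape (8,)
    z: binary drop-out code, shape (16,)
    """
    h = np.maximum(x @ W1, 0.0)  # ReLU hidden features
    h = h * z / keep_prob        # gate hidden units with the latent code
    return h @ W2                # deterministic regression head

x = rng.normal(size=8)
# Sampling several drop-out codes yields diverse outputs for the same x.
codes = [rng.binomial(1, 0.5, size=16) for _ in range(3)]
outputs = [generate(x, z) for z in codes]
```

The sketch highlights the design choice the abstract alludes to: stochasticity enters only through the discrete mask, so the underlying network remains a simple, stably trainable regressor while still admitting a sampling mechanism.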