Abstract
Generative Adversarial Networks (GANs) have
demonstrated impressive performance for data synthesis
and are now used in a wide range of computer vision tasks.
In spite of this success, they have gained a reputation for being difficult to train, which results in a time-consuming and
human-involved development process.
We consider an alternative training process, named
SGAN, in which several adversarial “local” pairs of networks are trained independently so that a “global” supervising pair of networks can be trained against them.
The goal is to train the global pair against the corresponding ensemble of opponents for improved performance in terms
of mode coverage. This approach aims at increasing the
chances that learning will not stop for the global pair, preventing it both from being trapped in an unsatisfactory local minimum and from facing the oscillations often observed in practice. To
guarantee the latter, the global pair never affects the local
ones.
The rules of SGAN training are thus as follows: the
global generator and discriminator are trained against the
local discriminators and generators, respectively, whereas
each local network is trained only against its fixed local opponent.
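The training schedule above can be sketched as follows. This is an illustrative scheduling skeleton only: the `Net` class, its `update` bookkeeping, and the sampling of one local pair per step are assumptions for the sketch, not the paper's actual networks or loss functions.

```python
import random

class Net:
    """Placeholder network: records which opponents its updates depended on."""
    def __init__(self, name):
        self.name = name
        self.trained_against = set()

    def update(self, opponent):
        # Stand-in for a gradient step against `opponent`.
        self.trained_against.add(opponent.name)

def sgan_step(local_pairs, g_global, d_global):
    # 1) Each local (generator, discriminator) pair trains independently
    #    against its own fixed opponent.
    for g_i, d_i in local_pairs:
        d_i.update(g_i)
        g_i.update(d_i)
    # 2) The global generator trains against a local discriminator, and the
    #    global discriminator against a local generator (sampled here for
    #    illustration from the ensemble of local pairs).
    g_i, d_i = random.choice(local_pairs)
    g_global.update(d_i)
    d_global.update(g_i)
    # Crucially, the global pair never calls update() on a local network.

local_pairs = [(Net(f"G{i}"), Net(f"D{i}")) for i in range(3)]
g_global, d_global = Net("G*"), Net("D*")
for _ in range(10):
    sgan_step(local_pairs, g_global, d_global)

# The one-way dependency holds: the global pair is shaped only by local
# networks, and no local network was ever influenced by the global pair.
assert g_global.trained_against <= {"D0", "D1", "D2"}
assert d_global.trained_against <= {"G0", "G1", "G2"}
for g, d in local_pairs:
    assert g.trained_against == {d.name}
    assert d.trained_against == {g.name}
```

The final assertions encode the key design choice: information flows only from the local pairs to the global pair, so the global networks can exploit the ensemble without destabilizing the independent local training runs.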
Experimental results on both toy and real-world problems demonstrate that this approach outperforms standard
training: it better mitigates mode collapse, is more stable while converging, and, surprisingly, increases
convergence speed as well.