StarGAN: Unified Generative Adversarial Networks
for Multi-Domain Image-to-Image Translation
Abstract
Recent studies have shown remarkable success in image-to-image translation for two domains. However, existing approaches have limited scalability and robustness in handling more than two domains, since a separate model must be built independently for every pair of image domains. To address this limitation, we propose StarGAN, a novel and scalable approach that can perform image-to-image translations for multiple domains using only a single model. StarGAN's unified model architecture allows simultaneous training of multiple datasets with different domains within a single network. This leads to StarGAN's superior quality of translated images compared to existing models, as well as the novel capability of flexibly translating an input image to any desired target domain. We empirically demonstrate the effectiveness of our approach on facial attribute transfer and facial expression synthesis tasks.
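To make the "single model, any target domain" idea concrete, below is a minimal illustrative sketch, not the authors' implementation: a single generator receives both an input image and a target-domain label, so the same network can translate to any domain by changing only the label. The PyTorch framing, the class name ToyGenerator, and the tiny network body are assumptions for illustration; StarGAN's actual generator is a deeper convolution-deconvolution network with residual blocks.

```python
# Minimal sketch (not the paper's code): one generator conditioned on a
# target-domain label, so a single network handles every domain.
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    def __init__(self, img_channels=3, num_domains=5):
        super().__init__()
        # The target-domain label is broadcast spatially and concatenated
        # with the input image channels before the first convolution.
        self.net = nn.Sequential(
            nn.Conv2d(img_channels + num_domains, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, img_channels, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, x, c):
        # x: (batch, channels, H, W) input image
        # c: (batch, num_domains) one-hot target-domain label
        c = c.view(c.size(0), c.size(1), 1, 1).expand(-1, -1, x.size(2), x.size(3))
        return self.net(torch.cat([x, c], dim=1))

# Usage: the same generator instance translates one input image to two
# different (hypothetical) target domains by swapping the label vector.
G = ToyGenerator(num_domains=5)
x = torch.randn(1, 3, 128, 128)            # input image
c_a = torch.eye(5)[0].unsqueeze(0)         # e.g. a "blond hair" domain
c_b = torch.eye(5)[3].unsqueeze(0)         # e.g. a "smiling" domain
y_a, y_b = G(x, c_a), G(x, c_b)
```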