Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
Abstract
Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they often lack high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4× upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between super-resolved images and original photo-realistic images.
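For illustration, the generator-side adversarial term can be sketched in a few lines of PyTorch. This is a minimal sketch, not the paper's implementation: the tiny discriminator below is a placeholder stand-in, not the deep convolutional discriminator the paper trains.

```python
import torch
import torch.nn as nn

# Placeholder discriminator: maps an image batch to a "real" probability.
# The paper's discriminator is a much deeper convolutional network.
discriminator = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1),
    nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 1),
    nn.Sigmoid(),
)

def adversarial_loss(sr_images: torch.Tensor) -> torch.Tensor:
    """Generator-side adversarial term: -log D(G(I_LR)), averaged over the batch.

    Minimizing this pushes super-resolved images toward outputs that the
    discriminator classifies as natural (real) images.
    """
    d_real_prob = discriminator(sr_images).clamp_min(1e-8)  # guard against log(0)
    return -torch.log(d_real_prob).mean()

# Usage sketch: sr_images would be G(I_LR), the generator's upscaled output.
# loss_adv = adversarial_loss(sr_images)
```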
In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space.
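The paper instantiates this content loss as an MSE between feature maps of a pretrained VGG network. The sketch below assumes torchvision's VGG19 truncated near the conv5_4 activation; the truncation index is illustrative, while the 10^-3 weighting of the adversarial term follows the paper's formulation. In practice the inputs would also be normalized to VGG's training statistics, omitted here for brevity.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights

# Truncate a pretrained VGG19 after a deep feature layer (here, up to and
# including the activation after conv5_4; the exact cut point is illustrative).
vgg_features = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:36].eval()
for p in vgg_features.parameters():
    p.requires_grad_(False)  # the feature extractor stays fixed during training

mse = nn.MSELoss()

def content_loss(sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
    """MSE between VGG feature maps of the super-resolved and reference images,
    i.e. similarity measured in feature space rather than pixel space."""
    return mse(vgg_features(sr), vgg_features(hr))

def perceptual_loss(sr: torch.Tensor, hr: torch.Tensor,
                    adv_term: torch.Tensor) -> torch.Tensor:
    """Total perceptual loss: content loss plus a small adversarial term,
    weighted by 10^-3 as in the paper."""
    return content_loss(sr, hr) + 1e-3 * adv_term
```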
Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows highly significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.