GANFIT: Generative Adversarial Network Fitting for High Fidelity 3D Face Reconstruction
Abstract
In the past few years, much work has been done on reconstructing the 3D facial structure from single images by capitalizing on the power of Deep Convolutional Neural Networks (DCNNs). In the most recent works, differentiable renderers were employed in order to learn the relationship between the facial identity features and the parameters of a 3D morphable model for shape and texture. The texture features either correspond to components of a linear texture space or are learned by auto-encoders directly from in-the-wild images. In all cases, state-of-the-art methods still cannot reconstruct facial textures in high fidelity. In this
paper, we take a radically different approach and harness
the power of Generative Adversarial Networks (GANs) and
DCNNs in order to reconstruct the facial texture and shape
from single images. That is, we utilize GANs to train a very
powerful generator of facial texture in UV space. Then, we
revisit the original 3D Morphable Models (3DMMs) fitting
approaches making use of non-linear optimization to find