Abstract
We propose a framework based on Generative Adversarial Networks to disentangle the identity and attributes of faces, so that we can conveniently recombine different identities and attributes for identity-preserving face synthesis in open domains. Previous identity-preserving face synthesis methods are largely confined to synthesizing faces with known identities that are already in the training dataset. To synthesize a face with an identity outside the training dataset, our framework requires only one input image of that subject to produce an identity vector, and any other input face image to extract an attribute vector capturing, e.g., pose, emotion, illumination, and even the background. We then recombine the identity vector and the attribute vector to synthesize a new face of the subject with the extracted attributes. Our proposed framework does not require any annotation of face attributes. It is trained with an asymmetric loss function to better preserve the identity and stabilize the training process. It can also effectively leverage large amounts of unlabeled face images to further improve the fidelity of the synthesized faces for subjects that are not present in the labeled training face dataset.
Our experiments demonstrate the efficacy of the proposed
framework. We also present its use in a much broader set of applications, including face frontalization, face attribute morphing, and face adversarial example detection.
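To make the identity/attribute recombination concrete, the following is a minimal sketch, not the paper's actual architecture: the Encoder and Generator modules, their layer sizes, and the 128-dimensional vectors below are illustrative assumptions. It shows only the core idea of encoding an identity vector from one face, an attribute vector from another, and decoding their concatenation into a new image.

```python
# Illustrative sketch (assumed toy architecture, not the paper's networks):
# recombine an identity vector from one face with an attribute vector
# extracted from any other face to synthesize a new image of the subject.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy convolutional encoder mapping a 3x64x64 face to a flat vector."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, dim),
        )

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Toy decoder synthesizing a face from the concatenation [identity; attribute]."""
    def __init__(self, id_dim, attr_dim):
        super().__init__()
        self.fc = nn.Linear(id_dim + attr_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),   # 32 -> 64
        )

    def forward(self, z_id, z_attr):
        z = torch.cat([z_id, z_attr], dim=1)        # recombine the two vectors
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

# Usage: identity comes from one image of the subject; attributes (pose,
# emotion, illumination, background) come from any other face image.
enc_id, enc_attr = Encoder(dim=128), Encoder(dim=128)
gen = Generator(id_dim=128, attr_dim=128)
subject_img = torch.randn(1, 3, 64, 64)    # image providing the identity
attribute_img = torch.randn(1, 3, 64, 64)  # image providing the attributes
synthesized = gen(enc_id(subject_img), enc_attr(attribute_img))
print(synthesized.shape)  # torch.Size([1, 3, 64, 64])
```

In the full framework these components would be trained adversarially with the asymmetric loss mentioned above; the sketch omits the discriminator and all training losses.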