Abstract
We devise a cascade GAN approach to generate talking face video, which is robust to different face shapes,
view angles, facial characteristics, and noisy audio conditions. Instead of learning a direct mapping from audio to video frames, we propose first to transfer the audio to a high-level structure, i.e., the facial landmarks, and then to generate video frames conditioned on those landmarks.
Compared to a direct audio-to-image approach, our cascade approach avoids fitting spurious correlations between
audiovisual signals that are irrelevant to the speech content. Humans are sensitive to temporal discontinuities and subtle artifacts in video. To avoid such pixel-jittering problems and to force the network to focus on audiovisual-correlated regions, we propose a novel dynamically adjustable pixel-wise loss with an attention mechanism. Furthermore, to generate sharper images with
well-synchronized facial movements, we propose a novel
regression-based discriminator structure, which considers
sequence-level information along with frame-level information. Thorough experiments on several datasets and real-world samples demonstrate that our method achieves significantly better results than state-of-the-art methods in both quantitative and qualitative comparisons.
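To make the attention-weighted pixel-wise loss concrete, the sketch below shows one minimal way such a loss could be written in PyTorch. It is illustrative only: the function name, the choice of L1 distance, and the detached attention map are our assumptions, not the paper's exact formulation.

```python
import torch

def dynamic_pixelwise_loss(generated, target, attention):
    """Attention-weighted pixel-wise L1 loss (illustrative sketch).

    generated, target: (B, C, H, W) video frames in [0, 1].
    attention:         (B, 1, H, W) map in [0, 1]; higher values mark
                       regions assumed to correlate with the audio
                       (e.g., the mouth), so errors there are weighted
                       more heavily. All names here are hypothetical.
    """
    # Detach the attention map so the loss reweights pixels without
    # back-propagating through the attention branch itself (an assumption).
    weights = attention.detach()
    per_pixel_error = torch.abs(generated - target)  # (B, C, H, W)
    return (weights * per_pixel_error).mean()

# Hypothetical usage inside a training step:
# loss = dynamic_pixelwise_loss(fake_frames, real_frames, attn_map)
```

Weighting the reconstruction error this way lets the network tolerate small deviations in static background regions while penalizing errors in audiovisual-correlated regions, which matches the intuition the abstract describes.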