Abstract
In this paper, we develop a novel framework for 3D tracking of non-rigid face deformation from a single camera. The difficulty of the problem lies in the fact that 3D deformation parameter estimation becomes unstable when there are few reliable facial feature correspondences. Unfortunately, this often occurs in real tracking scenarios when there is significant illumination change, motion blur, or large pose variation. In order to extract more feature-correspondence information, the proposed framework integrates three types of features that discriminate face deformation across different views: 1) semantic features, which provide constant correspondences between 3D model points and major facial features; 2) silhouette features, which provide dynamic correspondences between 3D model points and the facial silhouette under varying views; and 3) online tracking features, which provide redundant correspondences between 3D model points and salient image features. The integration of these complementary features is important for robust estimation of the 3D parameters. In order to estimate the high-dimensional 3D deformation parameters, we develop a hierarchical parameter estimation algorithm that robustly estimates both rigid and non-rigid 3D parameters. We show the importance of both feature fusion and hierarchical parameter estimation for reliably tracking 3D face deformation. Experiments demonstrate the robustness and accuracy of the proposed algorithm, especially in cases of agile head motion, drastic illumination change, and large pose change up to profile view.