Abstract
Parameterized Appearance Models (PAMs) such as Active Appearance Models (AAMs), Morphable Models, and Boosted Appearance Models have been extensively used for face alignment. Broadly speaking, PAM methods can be classified as generative or discriminative. Discriminative methods learn a mapping between appearance features and motion parameters (rigid and non-rigid). While discriminative approaches have some advantages (e.g., feature weighting, improved generalization), they suffer from two major drawbacks: (1) they need large amounts of perturbed samples to train a regressor or classifier, making the training process computationally expensive in space and time; (2) it is not practical to sample the space of motion parameters uniformly. In practice, some regions of the motion space are sampled more densely than others, resulting in biased models and a lack of generalization. To address these problems, this paper proposes a computationally efficient continuous regressor that does not require the sampling stage. Experiments on real data show the improvement in the memory and time required to train a discriminative appearance model, as well as improved generalization.