Abstract
Automatic facial action unit (AFA) detection from video is a long-standing problem in facial expression analysis. Most approaches emphasize choices of features and classifiers. They neglect individual differences in target persons. People vary markedly in facial morphology (e.g., heavy versus delicate brows, smooth versus deeply etched wrinkles) and behavior. Individual differences can dramatically influence how well generic classifiers generalize to previously unseen persons. While a possible solution would be to train person-specific classifiers, that often is neither feasible nor theoretically compelling. The alternative that we propose is to personalize a generic classifier in an unsupervised manner (no additional labels for the test subjects are required). We introduce a transductive learning method, which we refer to as Selective Transfer Machine (STM), to personalize a generic classifier by attenuating person-specific biases. STM achieves this effect by simultaneously learning a classifier and re-weighting the training samples that are most relevant to the test subject. To evaluate the effectiveness of STM, we compared STM to generic classifiers and to cross-domain learning methods on three major databases: CK+ [20], GEMEP-FERA [32] and RU-FACS [2]. STM outperformed generic classifiers in all.
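The core idea of re-weighting training samples toward the test subject's distribution can be illustrated with a minimal sketch. The weighting below uses mean RBF-kernel similarity to the test set as a simplified stand-in for STM's distribution-matching objective, and a weighted ridge regressor in place of the weighted SVM; the function names, the `gamma` and `reg` parameters, and both simplifications are this sketch's assumptions, not the paper's formulation.

```python
import numpy as np

def personalization_weights(X_train, X_test, gamma=1.0):
    """Weight each training sample by its mean RBF-kernel similarity
    to the test samples (illustrative stand-in for STM's
    distribution-matching step, not the paper's exact optimization)."""
    # pairwise squared distances between every train and test sample
    d2 = ((X_train[:, None, :] - X_test[None, :, :]) ** 2).sum(axis=-1)
    sim = np.exp(-gamma * d2).mean(axis=1)   # mean similarity to the test set
    return sim * len(sim) / sim.sum()        # normalize to mean weight 1

def fit_weighted_classifier(X, y, w, reg=1e-3):
    """Weighted ridge regression on +/-1 labels: a lightweight proxy
    for the instance-weighted SVM used in personalization."""
    Xb = np.hstack([X, np.ones((len(X), 1))])          # append bias column
    A = Xb.T @ (w[:, None] * Xb) + reg * np.eye(Xb.shape[1])
    return np.linalg.solve(A, Xb.T @ (w * y))          # weighted normal equations
```

In use, samples from people who resemble the test subject receive large weights and dominate the fit, while dissimilar people are attenuated, which is the personalization effect the abstract describes.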