Action Unit Detection with Region Adaptation, Multi-label Learning and
Optimal Temporal Fusion
Abstract
Action Unit (AU) detection has become essential for facial analysis. Many existing approaches struggle with aligning different face regions, fusing temporal information effectively, and training a single model for multiple AU labels. To address these problems, we propose a deep learning framework for AU detection with region-of-interest (ROI) adaptation, integrated multi-label learning, and optimal LSTM-based temporal fusion. First, ROI cropping nets (ROI Nets) ensure that specific regions of interest on the face are learned independently: each sub-region has its own local convolutional neural network (CNN), an ROI Net, whose convolutional filters are trained only on the corresponding region. Second, multi-label learning integrates the outputs of the individual ROI Nets, capturing the inter-relationships among AUs and acquiring global features across sub-regions for AU detection. Finally, the optimal selection of multiple LSTM layers to form the best LSTM Net is carried out to fuse temporal features for the most accurate AU prediction. The proposed approach is evaluated on two popular AU detection datasets, BP4D and DISFA, and outperforms the state of the art significantly, with average improvements of around 13% on BP4D and 25% on DISFA, respectively.
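The multi-label integration described above amounts to treating each AU as an independent binary label and training all AU outputs jointly on features pooled across sub-regions. The following minimal NumPy sketch illustrates that idea only; the ROI count, feature dimensions, and the single shared linear layer are our own illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def sigmoid(z):
    # Logistic function: maps each AU logit to an occurrence probability.
    return 1.0 / (1.0 + np.exp(-z))

def multi_label_bce(logits, labels, eps=1e-7):
    """Mean binary cross-entropy over all AUs: each AU is scored as an
    independent binary label (multi-label, not mutually exclusive classes)."""
    p = np.clip(sigmoid(logits), eps, 1.0 - eps)
    return float(-np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p)))

# Toy setup (illustrative sizes): features from 3 ROI sub-nets are
# concatenated into one global feature, then a single shared linear
# layer predicts 12 AUs jointly, so the AU outputs share parameters
# and gradients across sub-regions.
rng = np.random.default_rng(0)
num_rois, feat_dim, num_aus = 3, 16, 12
roi_features = [rng.normal(size=feat_dim) for _ in range(num_rois)]
fused = np.concatenate(roi_features)              # global feature across sub-regions
W = rng.normal(scale=0.1, size=(num_aus, fused.size))
b = np.zeros(num_aus)
logits = W @ fused + b                            # one logit per AU
labels = rng.integers(0, 2, size=num_aus).astype(float)
loss = multi_label_bce(logits, labels)            # joint loss over all AU labels
```

Minimizing this joint loss is what lets the shared layers learn correlations among co-occurring AUs, instead of training one detector per AU in isolation.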