
Emotion Recognition from Arbitrary View Facial Images

2020-03-31

Abstract

Emotion recognition from facial images is a very active research topic in human-computer interaction (HCI). However, most previous approaches focus only on frontal or nearly frontal view facial images. In contrast to frontal/nearly-frontal view images, emotion recognition from non-frontal or even arbitrary view facial images is much more difficult, yet of greater practical utility. To handle the emotion recognition problem for arbitrary view facial images, in this paper we propose a novel method based on the regional covariance matrix (RCM) representation of facial images. We also develop a new discriminant analysis theory, aiming at reducing the dimensionality of the facial feature vectors while preserving the most discriminative information, by minimizing an estimated multiclass Bayes error derived under the Gaussian mixture model (GMM). We further propose an efficient algorithm to solve for the optimal discriminant vectors of the proposed discriminant analysis method. We render thousands of multi-view 2D facial images from the BU-3DFE database and conduct extensive experiments on the generated database to demonstrate the effectiveness of the proposed method. It is worth noting that our method does not require face alignment or facial landmark localization, making it very attractive.
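To make the RCM representation concrete, below is a minimal sketch of computing a region covariance descriptor for an image patch. It assumes a grayscale input and a common illustrative feature set per pixel — coordinates, intensity, and absolute gradients — which may differ from the paper's exact feature configuration.

```python
import numpy as np

def region_covariance(image, top, left, height, width):
    """Region covariance matrix (RCM) descriptor for a rectangular
    patch of a grayscale image.

    Each pixel is mapped to a feature vector [x, y, I, |dI/dx|, |dI/dy|];
    the descriptor is the 5x5 covariance of these vectors over the region.
    The feature set here is illustrative, not necessarily the paper's.
    """
    img = np.asarray(image, dtype=float)
    gy, gx = np.gradient(img)  # per-pixel intensity gradients (rows, cols)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]

    # Stack per-pixel features, then restrict to the requested region
    feats = np.stack([xs, ys, img, np.abs(gx), np.abs(gy)], axis=-1)
    patch = feats[top:top + height, left:left + width].reshape(-1, 5)

    # Covariance over the region (rows are observations)
    return np.cov(patch, rowvar=False)
```

The resulting 5×5 symmetric positive semi-definite matrix summarizes the patch compactly and is invariant to the patch's pixel ordering, which is one reason RCM descriptors can tolerate pose variation without explicit alignment.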
