View and Style-Independent Action Manifolds for Human Activity Recognition

2020-03-31

Abstract

We introduce a novel approach to automatically learn intuitive and compact descriptors of human body motions for activity recognition. Each action descriptor is produced, first, by applying Temporal Laplacian Eigenmaps to view-dependent videos in order to produce a style-invariant embedded manifold for each view separately. Then, all view-dependent manifolds are automatically combined to discover a unified representation that models an action in a single three-dimensional space, independently of style and viewpoint. In addition, a bidirectional nonlinear mapping function is incorporated to allow projecting actions between the original and embedded spaces. The proposed framework is evaluated on a real and challenging dataset (IXMAS), which is composed of a variety of actions seen from arbitrary viewpoints. Experimental results demonstrate robustness against style and view variation and match the accuracy of the best existing action recognition methods.
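The per-view embedding step described above can be illustrated with a short sketch. The code below is a minimal, hypothetical implementation of Temporal Laplacian Eigenmaps, assuming frame-level features `X` (one row per video frame): it augments the usual k-nearest-neighbour graph of standard Laplacian Eigenmaps with edges between temporally adjacent frames, then solves the generalized eigenproblem L y = λ D y for a three-dimensional embedding. The function name, parameters, and unit edge weights are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.linalg import eigh

def temporal_laplacian_eigenmaps(X, k_spatial=5, dim=3):
    """Embed a sequence of frame features X (T x D) into `dim` dimensions.

    Builds an adjacency graph joining each frame to its k nearest
    neighbours in feature space *and* to its temporal neighbours
    (previous/next frame), then solves the Laplacian Eigenmaps
    generalized eigenproblem L y = lambda D y.
    """
    T = X.shape[0]
    # Pairwise squared distances in the original feature space.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)

    W = np.zeros((T, T))
    # Spatial neighbours: k nearest frames by appearance (skip self at index 0).
    for i in range(T):
        nn = np.argsort(d2[i])[1:k_spatial + 1]
        W[i, nn] = 1.0
    # Temporal neighbours: adjacent frames in the sequence.
    for i in range(T - 1):
        W[i, i + 1] = 1.0
    W = np.maximum(W, W.T)  # symmetrize the graph

    D = np.diag(W.sum(1))   # degree matrix
    L = D - W               # graph Laplacian
    # Eigenvectors with the smallest non-trivial eigenvalues give the
    # embedding; the first (constant) eigenvector is discarded.
    vals, vecs = eigh(L, D)
    return vecs[:, 1:dim + 1]
```

Per the abstract, one such embedding would be computed for each camera view, and the resulting view-dependent manifolds then combined into the unified representation; that alignment step and the bidirectional mapping between the original and embedded spaces are beyond this sketch.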
