
Learning a Non-linear Knowledge Transfer Model for Cross-View Action Recognition

2019-12-25

Abstract

This paper concerns action recognition from unseen and unknown views. We propose unsupervised learning of a non-linear model that transfers knowledge from multiple views to a canonical view. The proposed Non-linear Knowledge Transfer Model (NKTM) is a deep network, with weight decay and sparsity constraints, which finds a shared high-level virtual path from videos captured from different unknown viewpoints to the same canonical view. The strength of our technique is that we learn a single NKTM for all actions and all camera viewing directions. Thus, NKTM does not require action labels during learning or knowledge of the camera viewpoints during training or testing. NKTM is learned once only from dense trajectories of synthetic points fitted to mocap data and then applied to real video data. Trajectories are coded with a general codebook learned from the same mocap data. NKTM is scalable to new action classes and training data as it does not require re-learning. Experiments on the IXMAS and N-UCLA datasets show that NKTM outperforms existing state-of-the-art methods for cross-view action recognition.
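The core idea of the abstract, a single shared non-linear network that maps view-dependent action descriptors to a canonical-view representation, trained with weight decay, can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the layer sizes, tanh activation, plain gradient descent, and synthetic stand-in data are all assumptions, and the paper's sparsity constraint is omitted for brevity.

```python
import numpy as np

# Sketch of an NKTM-style network: one shared non-linear mapping from
# view-dependent descriptors to a canonical-view descriptor, with L2
# weight decay. All sizes and hyperparameters are hypothetical.
rng = np.random.default_rng(0)
d_in, d_hid, d_out = 64, 32, 64          # hypothetical descriptor sizes
W1 = rng.normal(0.0, 0.1, (d_in, d_hid)); b1 = np.zeros(d_hid)
W2 = rng.normal(0.0, 0.1, (d_hid, d_out)); b2 = np.zeros(d_out)

def forward(X):
    h = np.tanh(X @ W1 + b1)             # shared non-linear hidden layer
    return h, h @ W2 + b2                # canonical-view reconstruction

# Stand-in for paired training data: descriptors of the same action
# seen from an arbitrary view (X) and from the canonical view (Y).
X = rng.normal(size=(128, d_in))
Y = rng.normal(size=(128, d_out))

lr, weight_decay = 0.05, 1e-4
_, Y0 = forward(X)
initial_loss = float(np.mean((Y0 - Y) ** 2))

for _ in range(300):
    h, Y_hat = forward(X)
    err = Y_hat - Y                      # reconstruction error
    # Backprop for squared loss plus L2 weight decay.
    gW2 = h.T @ err / len(X) + weight_decay * W2
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)   # tanh derivative
    gW1 = X.T @ dh / len(X) + weight_decay * W1
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, Yf = forward(X)
final_loss = float(np.mean((Yf - Y) ** 2))
print(final_loss < initial_loss)
```

Because the same weights serve every action and viewpoint, this mirrors the abstract's claim that one model is learned once and reused without retraining; only the input descriptors change across views.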

