Multi-Task Sparse Learning with Beta Process Prior for Action Recognition

2019-11-27
Abstract In this paper, we formulate human action recognition as a novel Multi-Task Sparse Learning (MTSL) framework which aims to construct a test sample with multiple features from as few bases as possible. Learning the sparse representation under each feature modality is considered as a single task in MTSL. Since the tasks are generated from multiple features associated with the same visual input, they are not independent but inter-related. We introduce a Beta process (BP) prior to the hierarchical MTSL model, which efficiently learns a compact dictionary and infers the sparse structure shared across all the tasks. The MTSL model improves the robustness of coefficient estimation compared with performing each task independently. Besides, sparseness is achieved via the Beta process formulation rather than the computationally expensive l1-norm penalty. With non-informative gamma hyper-priors, the sparsity level is decided entirely by the data. Finally, the learning problem is solved by Gibbs sampling inference, which estimates the full posterior over the model parameters. Experimental results on the KTH and UCF Sports datasets demonstrate the effectiveness of the proposed MTSL approach for action recognition.
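To make the hierarchical model in the abstract concrete, below is a minimal generative sketch of multi-task sparse coding under a finite truncation of the Beta process, written in Python/NumPy. All dimensions (K, M, dims), hyper-parameter values, and variable names (pi, z, w, D) are hypothetical illustrations chosen for the sketch, not the authors' implementation, and the Gibbs-sampling inference described in the paper is omitted; the point is only to show the shared binary support across feature-modality tasks.

```python
# Minimal generative sketch (assumed setup, not the authors' code):
# a finite Beta-Bernoulli approximation of the Beta process provides a
# sparse support z that is shared by all tasks (feature modalities).
import numpy as np

rng = np.random.default_rng(0)

K = 64                 # dictionary size (truncation level), hypothetical
M = 3                  # number of tasks, one per feature modality
dims = [32, 48, 16]    # feature dimension of each modality, hypothetical
a, b = 1.0, 1.0        # Beta process concentration parameters, hypothetical

# Finite Beta approximation: pi_k is the probability that atom k is used.
pi = rng.beta(a / K, b * (K - 1) / K, size=K)

# Binary indicators shared across tasks -> common sparse structure.
z = rng.binomial(1, pi, size=K)

# Precisions for weights and noise; the paper places non-informative gamma
# hyper-priors on these, here we just draw from a moderate Gamma(2, 1).
alpha_w = rng.gamma(2.0, 1.0)
alpha_e = rng.gamma(2.0, 1.0)

x = []
for m in range(M):
    # Per-task dictionary and Gaussian weights.
    D = rng.normal(0.0, 1.0 / np.sqrt(dims[m]), size=(dims[m], K))
    w = rng.normal(0.0, 1.0 / np.sqrt(alpha_w), size=K)
    # Sparse code = element-wise product of the shared support and weights.
    s = z * w
    x.append(D @ s + rng.normal(0.0, 1.0 / np.sqrt(alpha_e), size=dims[m]))

print("shared support size:", int(z.sum()),
      "| sample dims:", [v.shape for v in x])
```

In the actual model the posterior over pi, z, the weights, the dictionaries, and the precisions would be inferred jointly by Gibbs sampling from observed multi-feature action descriptors, rather than sampled forward as above.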

