Action Recognition with Trajectory-Pooled Deep-Convolutional Descriptors

Abstract

Visual features are of vital importance for human action understanding in videos. This paper presents a new video representation, called trajectory-pooled deep-convolutional descriptor (TDD), which shares the merits of both hand-crafted features [31] and deep-learned features [24]. Specifically, we utilize deep architectures to learn discriminative convolutional feature maps, and conduct trajectory-constrained pooling to aggregate these convolutional features into effective descriptors. To enhance the robustness of TDDs, we design two normalization methods to transform convolutional feature maps, namely spatiotemporal normalization and channel normalization. The advantages of our features come from (i) TDDs are automatically learned and have higher discriminative capacity than hand-crafted features; (ii) TDDs take account of the intrinsic characteristics of the temporal dimension and introduce the strategies of trajectory-constrained sampling and pooling for aggregating deep-learned features. We conduct experiments on two challenging datasets: HMDB51 and UCF101. Experimental results show that TDDs outperform previous hand-crafted features [31] and deep-learned features [24]. Our method also achieves superior performance to the state of the art on these datasets.
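To make the two normalization schemes and the trajectory-constrained pooling more concrete, below is a minimal NumPy sketch. It assumes feature maps of shape (T, H, W, C), max-based normalization (per channel over the whole spatiotemporal extent, or per position over channels), and sum-pooling of feature values along each trajectory; the function names, the `eps` term, and the `scale` argument mapping video coordinates to the feature-map grid are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def spatiotemporal_normalize(feat, eps=1e-8):
    """Divide each channel by its max over the whole spatiotemporal extent.

    feat: convolutional feature maps for one video, shape (T, H, W, C).
    """
    max_per_channel = feat.max(axis=(0, 1, 2), keepdims=True)  # (1, 1, 1, C)
    return feat / (max_per_channel + eps)

def channel_normalize(feat, eps=1e-8):
    """Divide each spatiotemporal position by its max across channels."""
    max_per_position = feat.max(axis=3, keepdims=True)  # (T, H, W, 1)
    return feat / (max_per_position + eps)

def trajectory_pool(feat, trajectory, scale=1.0):
    """Sum-pool normalized feature values along one trajectory.

    trajectory: list of (t, x, y) points in video coordinates;
    scale maps video coordinates onto the coarser feature-map grid.
    Returns a C-dimensional descriptor (one TDD per trajectory and layer).
    """
    T, H, W, C = feat.shape
    desc = np.zeros(C, dtype=feat.dtype)
    for t, x, y in trajectory:
        ti = min(max(int(t), 0), T - 1)
        yi = min(max(int(round(y * scale)), 0), H - 1)
        xi = min(max(int(round(x * scale)), 0), W - 1)
        desc += feat[ti, yi, xi, :]
    return desc

# Example usage with random feature maps and a synthetic trajectory.
feat = np.random.rand(15, 28, 28, 512).astype(np.float32)
traj = [(t, 100.0 + t, 80.0 + 0.5 * t) for t in range(15)]
tdd = trajectory_pool(spatiotemporal_normalize(feat), traj, scale=28 / 224)
print(tdd.shape)  # (512,)
```

In practice one descriptor is extracted per trajectory, per convolutional layer, and per normalization scheme, and the resulting TDDs are then encoded (e.g. with Fisher vectors) into a video-level representation.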
