
Watching Unlabeled Video Helps Learn New Human Actions from Very Few Labeled Snapshots

2019-12-10

Abstract

We propose an approach to learn action categories from static images that leverages prior observations of generic human motion to augment its training process. Using unlabeled video containing various human activities, the system first learns how body pose tends to change locally in time. Then, given a small number of labeled static images, it uses that model to extrapolate beyond the given exemplars and generate synthetic training examples: poses that could link the observed images and/or immediately precede or follow them in time. In this way, we expand the training set without requiring additional manually labeled examples. We explore both example-based and manifold-based methods to implement our idea. Applying our approach to recognize actions in both images and video, we show it enhances a state-of-the-art technique when very few labeled training examples are available.
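The abstract mentions an example-based and a manifold-based implementation of this idea; the example-based variant lends itself to a compact illustration. The sketch below is a hedged reading of that idea rather than the paper's actual pipeline: it assumes poses are represented as fixed-length joint-coordinate descriptors, and the function and parameter names (`augment_with_video_poses`, `window`, `k`) are invented for illustration. Each labeled snapshot is matched to its most similar poses in the unlabeled video, and the temporally adjacent frames are borrowed as synthetic training examples carrying the snapshot's label.

```python
import numpy as np

def augment_with_video_poses(labeled_poses, labeled_actions,
                             video_poses, window=1, k=5):
    """Example-based augmentation (illustrative sketch, not the paper's code).

    labeled_poses   : (n, d) pose descriptors from the few labeled snapshots
    labeled_actions : length-n action labels for those snapshots
    video_poses     : (m, d) pose descriptors from consecutive unlabeled frames
    window          : number of neighboring frames to borrow on each side
    k               : number of nearest video poses matched per snapshot

    Returns the original examples plus synthetic ones: poses that
    immediately precede or follow the matched video frames, labeled with
    the same action as the snapshot they were matched to.
    """
    aug_poses, aug_labels = list(labeled_poses), list(labeled_actions)
    for pose, action in zip(labeled_poses, labeled_actions):
        # Find the k video frames whose pose is most similar to the snapshot.
        dists = np.linalg.norm(video_poses - pose, axis=1)
        for idx in np.argsort(dists)[:k]:
            # Borrow temporally adjacent poses as synthetic examples.
            for off in range(-window, window + 1):
                t = idx + off
                if off != 0 and 0 <= t < len(video_poses):
                    aug_poses.append(video_poses[t])
                    aug_labels.append(action)
    return np.asarray(aug_poses), np.asarray(aug_labels)
```

The augmented set can then be fed to any standard classifier; the point of the approach is that the extra examples come from unlabeled footage rather than from additional manual annotation.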
