
A Hierarchical Representation for Future Action Prediction

2020-04-07

Abstract

We consider inferring the future actions of people from a still image or a short video clip. Predicting future actions before they are actually executed is a critical ingredient for enabling us to effectively interact with other humans on a daily basis. However, the challenges are twofold: First, we need to capture the subtle details inherent in human movements that may imply a future action; second, predictions usually should be carried out as quickly as possible in the social world, when limited prior observations are available. In this paper, we propose hierarchical movemes - a new representation to describe human movements at multiple levels of granularity, ranging from atomic movements (e.g. an open arm) to coarser movements that cover a larger temporal extent. We develop a max-margin learning framework for future action prediction, integrating a collection of moveme detectors in a hierarchical way. We validate our method on two publicly available datasets and show that it achieves very promising performance.
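The framework described above scores candidate future actions linearly over detector responses pooled across hierarchy levels. The following is a minimal, hypothetical sketch of that idea, assuming one weight vector per action class (as max-margin training would produce) and random stand-in detector responses; the function and variable names are illustrative, not the paper's code.

```python
# Hypothetical sketch: max-margin style scoring of future actions
# over hierarchical moveme detector responses (assumed layout).
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 3          # number of future-action classes (illustrative)
LEVELS = [4, 2, 1]     # movemes per granularity level, fine -> coarse
DIM = sum(LEVELS)      # total detector responses per observation

# One weight vector per action class; in the paper these would be
# learned with a max-margin objective rather than sampled randomly.
W = rng.normal(size=(N_ACTIONS, DIM))

def moveme_features(responses_per_level):
    """Concatenate detector scores from all hierarchy levels."""
    return np.concatenate(responses_per_level)

def predict(responses_per_level):
    """Return the future-action index with the highest linear score."""
    phi = moveme_features(responses_per_level)
    return int(np.argmax(W @ phi))

# Example: stand-in detector responses for one short clip.
clip = [rng.normal(size=n) for n in LEVELS]
print(predict(clip))
```

Because coarser levels summarize longer temporal extents, the same scoring works when only an early, partial observation is available: missing coarse detectors can simply contribute zero responses.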

