First-Person Activity Recognition: What Are They Doing to Me?

2019-11-27
Abstract: This paper discusses the problem of recognizing interaction-level human activities from a first-person viewpoint. The goal is to enable an observer (e.g., a robot or a wearable camera) to understand 'what activity others are performing to it' from continuous video inputs. These include friendly interactions such as 'a person hugging the observer' as well as hostile interactions like 'punching the observer' or 'throwing objects at the observer', whose videos involve a large amount of camera ego-motion caused by physical interactions. The paper investigates multichannel kernels to integrate global and local motion information, and presents a new activity learning/recognition methodology that explicitly considers temporal structures displayed in first-person activity videos. In our experiments, we not only show classification results with segmented videos, but also confirm that our new approach is able to detect activities from continuous videos reliably.
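The abstract mentions multichannel kernels that integrate global and local motion information. A common way to realize this is to compute a base kernel per descriptor channel and combine them by a weighted sum, then feed the combined Gram matrix to a kernel classifier. The sketch below is illustrative only (channel names, dimensions, and the choice of histogram-intersection base kernel are assumptions, not taken from the paper):

```python
import numpy as np

def histogram_intersection_kernel(X, Y):
    """Base kernel for histogram features: K[i, j] = sum_d min(X[i, d], Y[j, d])."""
    return np.minimum(X[:, None, :], Y[None, :, :]).sum(axis=2)

def multichannel_kernel(channels_X, channels_Y, weights=None):
    """Weighted sum of per-channel base kernels (one channel per descriptor type)."""
    if weights is None:
        weights = [1.0] * len(channels_X)
    return sum(w * histogram_intersection_kernel(Xc, Yc)
               for w, Xc, Yc in zip(weights, channels_X, channels_Y))

# Hypothetical features for 4 video segments:
# a "global motion" channel (e.g. camera ego-motion histograms) and a
# "local motion" channel (e.g. local spatio-temporal descriptors).
rng = np.random.default_rng(0)
global_motion = rng.random((4, 8))
local_motion = rng.random((4, 16))

K = multichannel_kernel([global_motion, local_motion],
                        [global_motion, local_motion])
print(K.shape)  # (4, 4) Gram matrix, usable as a precomputed SVM kernel
```

The resulting symmetric Gram matrix can be passed to any classifier that accepts precomputed kernels (e.g. an SVM), with the channel weights tuned by cross-validation.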

