
Delving into Egocentric Actions

2019-12-17

Abstract

We address the challenging problem of recognizing the camera wearer's actions from videos captured by an egocentric camera. Egocentric videos encode a rich set of signals regarding the camera wearer, including head movement, hand pose and gaze information. We propose to utilize these mid-level egocentric cues for egocentric action recognition. We present a novel set of egocentric features and show how they can be combined with motion and object features. The result is a compact representation with superior performance. In addition, we provide the first systematic evaluation of motion, object and egocentric cues in egocentric action recognition. Our benchmark leads to several surprising findings. These findings uncover the best practices for egocentric action recognition, with a significant performance boost over all previous state-of-the-art methods on three publicly available datasets.
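The abstract describes combining egocentric cues with motion and object features into a compact representation. As a minimal sketch of one common way to fuse per-clip descriptors, the snippet below L2-normalizes each modality and concatenates them; the feature dimensions and the fusion scheme here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical per-clip descriptors; dimensions are illustrative,
# not taken from the paper.
rng = np.random.default_rng(0)
motion_feat = rng.random(64)    # e.g. optical-flow / trajectory statistics
object_feat = rng.random(128)   # e.g. pooled object-recognition scores
ego_feat = rng.random(32)       # e.g. head motion, hand pose, gaze cues

def l2norm(x):
    """Scale a vector to unit L2 norm (no-op on the zero vector)."""
    n = np.linalg.norm(x)
    return x / n if n > 0 else x

# Late fusion by normalization + concatenation, a common baseline scheme;
# the paper's exact combination strategy may differ.
combined = np.concatenate([l2norm(motion_feat),
                           l2norm(object_feat),
                           l2norm(ego_feat)])
print(combined.shape)  # (224,)
```

The fused vector could then be fed to any standard classifier (e.g. a linear SVM) for action recognition.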

