Activities as Time Series of Human Postures

2020-03-31

Abstract

This paper presents an exemplar-based approach to detecting and localizing human actions, such as running, cycling, and swinging, in realistic videos with dynamic backgrounds. We show that such activities can be compactly represented as time series of a few snapshots of human-body parts in their most discriminative postures, relative to other activity classes. This enables our approach to efficiently store multiple diverse exemplars per activity class, and quickly retrieve exemplars that best match the query by aligning their short time-series representations. Given a set of example videos of all activity classes, we extract multiscale regions from all their frames, and then learn a sparse dictionary of most discriminative regions. The Viterbi algorithm is then used to track detections of the learned codewords across frames of each video, resulting in their compact time-series representations. Dictionary learning is cast within the large-margin framework, wherein we study the effects of ℓ1 and ℓ2 regularization on the sparseness of the resulting dictionaries. Our experiments demonstrate robustness and scalability of our approach on challenging YouTube videos.
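The abstract describes tracking codeword detections across frames with the Viterbi algorithm to obtain a time-series representation. A minimal sketch of that dynamic-programming step, assuming hypothetical inputs: a per-frame score matrix for each learned codeword and a pairwise transition matrix (both names are illustrative, not from the paper):

```python
import numpy as np

def viterbi_track(scores, transition):
    """Find the highest-scoring sequence of codeword detections over frames.

    scores:     (T, K) array; scores[t, k] is the detection score of
                codeword k in frame t.
    transition: (K, K) array; transition[i, j] is the compatibility of
                switching from codeword i to codeword j between frames.
    Returns an array of length T with the chosen codeword per frame.
    """
    T, K = scores.shape
    delta = np.zeros((T, K))            # best cumulative score ending in k at t
    back = np.zeros((T, K), dtype=int)  # backpointers
    delta[0] = scores[0]
    for t in range(1, T):
        # candidate[i, j]: score of being at codeword i at t-1, moving to j
        candidate = delta[t - 1][:, None] + transition
        back[t] = np.argmax(candidate, axis=0)
        delta[t] = scores[t] + np.max(candidate, axis=0)
    # Backtrack the optimal path from the best final state.
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(delta[-1]))
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path
```

The resulting per-frame codeword path is exactly the compact "time series of postures" the paper stores and later aligns against queries.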
