
Learning Temporal Regularity in Video Sequences

2019-12-26

Abstract

Perceiving meaningful activities in a long video sequence is a challenging problem due to the ambiguous definition of 'meaningfulness' as well as clutters in the scene. We approach this problem by learning a generative model for regular motion patterns (termed as regularity) using multiple sources with very limited supervision. Specifically, we propose two methods that are built upon the autoencoders for their ability to work with little to no supervision. We first leverage the conventional handcrafted spatio-temporal local features and learn a fully connected autoencoder on them. Second, we build a fully convolutional feed-forward autoencoder to learn both the local features and the classifiers as an end-to-end learning framework. Our model can capture the regularities from multiple datasets. We evaluate our methods in both qualitative and quantitative ways, showing the learned regularity of videos in various aspects and demonstrating competitive performance on anomaly detection datasets as an application.
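The core idea of both methods can be sketched in a few lines: train an autoencoder only on regular data, then score a new input by its reconstruction error, with irregular (anomalous) inputs reconstructing poorly. The following is a minimal toy sketch, not the paper's architecture: a one-hidden-layer autoencoder trained with plain gradient descent on synthetic "regular" signals (phase-shifted sinusoids standing in for regular motion features), then compared on a regular versus an irregular (white-noise) input. All dimensions, learning rates, and the synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for regular motion features: D-dim phase-shifted sinusoids.
D, H = 32, 8
t = np.linspace(0, 2 * np.pi, D)

def regular_sample():
    return np.sin(t + rng.uniform(0, 2 * np.pi))

X = np.stack([regular_sample() for _ in range(512)])  # training set of regular data only

# One-hidden-layer autoencoder: tanh encoder, linear decoder.
W1 = rng.normal(0, 0.1, (D, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, D)); b2 = np.zeros(D)
lr = 0.1
for _ in range(500):                      # full-batch gradient descent on squared error
    Z = np.tanh(X @ W1 + b1)              # encode
    err = (Z @ W2 + b2) - X               # reconstruction residual
    gW2 = Z.T @ err / len(X); gb2 = err.mean(0)
    dZ = (err @ W2.T) * (1 - Z ** 2)      # backprop through tanh
    gW1 = X.T @ dZ / len(X); gb1 = dZ.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def recon_error(x):
    """Reconstruction error e(x); low for regular inputs, high for irregular ones."""
    z = np.tanh(x @ W1 + b1)
    return float(np.sum((z @ W2 + b2 - x) ** 2))

e_reg = recon_error(regular_sample())     # regular input: near the learned manifold
e_irr = recon_error(rng.normal(0, 1, D))  # irregular input: white noise
print(e_reg < e_irr)                      # regular data reconstructs better
```

In the same spirit, a per-frame regularity score can be derived by normalizing the error over a video, so that low-error (regular) frames score near 1 and high-error frames near 0; thresholding that score yields the anomaly detection application mentioned in the abstract.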

