
Patch to the Future: Unsupervised Visual Prediction

2019-12-17

Abstract

In this paper we present a conceptually simple but surprisingly powerful method for visual prediction that combines the effectiveness of mid-level visual elements with temporal modeling. Our framework can be learned in a completely unsupervised manner from a large collection of videos. More importantly, because our approach builds the prediction framework on these mid-level elements, we can predict not only the possible motion in the scene but also the visual appearances: how appearances are going to change with time. This yields a visual hallucination of probable events on top of the scene. We show that our method is able to accurately predict and visualize simple future events; we also show that our approach is comparable to supervised methods for event prediction.

Previous: Weakly Supervised Multiclass Video Segmentation

Next: Unsupervised One-Class Learning for Automatic Outlier Removal

