Anticipating Visual Representations from Unlabeled Video

2019-12-26

Abstract

Anticipating actions and objects before they start or appear is a difficult problem in computer vision with several real-world applications. This task is challenging partly because it requires leveraging extensive knowledge of the world that is difficult to write down. We believe that a promising resource for efficiently learning this knowledge is readily available unlabeled video. We present a framework that capitalizes on temporal structure in unlabeled video to learn to anticipate human actions and objects. The key idea behind our approach is that we can train deep networks to predict the visual representation of images in the future. Visual representations are a promising prediction target because they encode images at a higher semantic level than pixels yet are automatic to compute. We then apply recognition algorithms on our predicted representation to anticipate objects and actions. We experimentally validate this idea on two datasets, anticipating actions one second in the future and objects five seconds in the future.
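
The core idea lends itself to a compact self-supervised training loop: extract features from the frame at time t, regress the features of the frame at t + K, and take the regression target from the future frame itself, so no labels are needed. The sketch below is a minimal illustration, not the authors' released code; the choice of PyTorch, AlexNet fc7 as the representation, and a single regression head are all assumptions (the paper's full model uses a multi-modal mixture of networks).

```python
# Minimal sketch of the idea, not the authors' implementation.
# Assumptions: PyTorch/torchvision, AlexNet fc7 as the representation,
# a single regression head instead of the paper's mixture of networks.
import torch
import torch.nn as nn
import torchvision.models as models

class FuturePredictor(nn.Module):
    """Regress the feature vector of a frame K seconds in the future."""
    def __init__(self, feat_dim=4096):
        super().__init__()
        backbone = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
        self.features = backbone.features
        self.avgpool = backbone.avgpool
        # Keep everything up to fc7 (drop the final classification layer).
        self.fc = nn.Sequential(*list(backbone.classifier.children())[:-1])
        for p in self.parameters():          # freeze the feature extractor
            p.requires_grad = False
        # Trainable head mapping current features to predicted future features.
        self.predictor = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def extract(self, frames):
        x = torch.flatten(self.avgpool(self.features(frames)), 1)
        return self.fc(x)

    def forward(self, current_frames):
        return self.predictor(self.extract(current_frames))

model = FuturePredictor()
opt = torch.optim.Adam(model.predictor.parameters(), lr=1e-4)

# A training pair mined from unlabeled video: a frame at time t and the
# frame K seconds later (random tensors stand in for decoded frames).
frame_t = torch.randn(8, 3, 224, 224)
frame_t_plus_k = torch.randn(8, 3, 224, 224)

pred = model(frame_t)
with torch.no_grad():
    target = model.extract(frame_t_plus_k)   # supervision from the video itself
loss = nn.functional.mse_loss(pred, target)  # regress the future representation
loss.backward()
opt.step()
```

At test time, a recognition classifier trained in the same feature space is applied to the predicted representation to name the action or object before it occurs; anticipating objects five seconds ahead only changes which future frame supplies the regression target.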
