Discovering the physical parts of an articulated object class from multiple videos

2019-12-26

Abstract

We propose a motion-based method to discover the physical parts of an articulated object class (e.g. head/torso/legs of a horse) from multiple videos. The key is to find object regions that exhibit consistent motion relative to the rest of the object, across multiple videos. We can then learn a location model for the parts and segment them accurately in the individual videos using an energy function that also enforces temporal and spatial consistency in part motion. Unlike our approach, traditional methods for motion segmentation or non-rigid structure from motion operate on one video at a time. Hence they cannot discover a part unless it displays independent motion in that particular video. We evaluate our method on a new dataset of 32 videos of tigers and horses, where we significantly outperform a recent motion segmentation method on the task of part discovery (obtaining roughly twice the accuracy).
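The core idea of the abstract can be illustrated with a toy sketch: score a candidate part region by how consistently it moves relative to the whole object across several videos. This is not the authors' implementation; the data layout, helper names, and the spread-based score below are illustrative assumptions.

```python
import math

def relative_motion(part_track, object_track):
    # Per-frame motion of the candidate part relative to the
    # overall object motion (subtracts out global movement).
    return [(px - ox, py - oy)
            for (px, py), (ox, oy) in zip(part_track, object_track)]

def consistency_score(videos):
    """Score a candidate part across multiple videos.

    `videos` is a list of (part_track, object_track) pairs, one per
    video; each track is a list of per-frame (dx, dy) displacements.
    Lower score = the part's relative motion agrees across videos,
    so it is a better physical-part candidate.
    """
    rels = [relative_motion(p, o) for p, o in videos]
    # Mean relative displacement within each video.
    means = []
    for rel in rels:
        mx = sum(dx for dx, _ in rel) / len(rel)
        my = sum(dy for _, dy in rel) / len(rel)
        means.append((mx, my))
    # Spread of the per-video means: a small spread means the region
    # moved the same way relative to the object in every video.
    cx = sum(m[0] for m in means) / len(means)
    cy = sum(m[1] for m in means) / len(means)
    return sum(math.hypot(mx - cx, my - cy) for mx, my in means) / len(means)
```

For example, a "head" region that bobs the same way relative to the torso in every video gets a low score, while a region whose relative motion flips direction between videos scores high and is rejected.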

