
Object-based Multiple Foreground Video Co-segmentation


Abstract

We present a video co-segmentation method that uses category-independent object proposals as its basic element and can extract multiple foreground objects in a video set. The use of object elements overcomes limitations of low-level feature representations in separating complex foregrounds and backgrounds. We formulate object-based co-segmentation as a co-selection graph in which regions with foreground-like characteristics are favored while also accounting for intra-video and inter-video foreground coherence. To handle multiple foreground objects, we expand the co-selection graph model into a proposed multi-state selection graph model (MSG) that optimizes the segmentations of different objects jointly. This extension into the MSG can be applied not only to our co-selection graph, but can also be used to turn any standard graph model into a multi-state selection solution that can be optimized directly by existing energy minimization techniques. Our experiments show that our object-based multiple foreground video co-segmentation method (ObMiC) compares well to related techniques on both single and multiple foreground cases.
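
For intuition, the co-selection idea can be pictured as an energy over per-frame proposal choices. The formulation below is a generic sketch under our own assumptions, not the paper's exact model; the terms \phi, \psi_{\mathrm{intra}}, \psi_{\mathrm{inter}}, the number of foregrounds K, and the non-overlap constraint are illustrative.

E(\mathbf{x}) \;=\; \sum_{k=1}^{K} \Big[ \sum_{v}\sum_{t} \phi\big(x_{v,t}^{k}\big) \;+\; \sum_{v}\sum_{t} \psi_{\mathrm{intra}}\big(x_{v,t}^{k},\, x_{v,t+1}^{k}\big) \;+\; \sum_{v \neq v'} \psi_{\mathrm{inter}}\big(x_{v,t}^{k},\, x_{v',t'}^{k}\big) \Big],
\qquad \text{s.t. proposals selected for different } k \text{ in the same frame do not overlap,}

where x_{v,t}^{k} indexes the object proposal chosen for foreground k in frame t of video v, \phi scores how foreground-like a proposal is, and the pairwise terms encourage intra-video temporal consistency and inter-video appearance coherence. Setting K = 1 corresponds to a single-foreground co-selection graph, while K > 1 sketches the joint multi-object optimization that the MSG extension performs.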

