
BubbleNets: Learning to Select the Guidance Frame in Video Object Segmentation by Deep Sorting Frames

2019-09-10

Abstract: Semi-supervised video object segmentation has made significant progress on real and challenging videos in recent years. The current paradigm for segmentation methods and benchmark datasets is to segment objects in video provided a single annotation in the first frame. However, we find that segmentation performance across the entire video varies dramatically when selecting an alternative frame for annotation. This paper addresses the problem of learning to suggest the single best frame across the video for user annotation—this is, in fact, never the first frame of the video. We achieve this by introducing BubbleNets, a novel deep sorting network that learns to select frames using a performance-based loss function, which enables the conversion of expansive amounts of training examples from already existing datasets. Using BubbleNets, we are able to achieve an 11% relative improvement in segmentation performance on the DAVIS benchmark without any changes to the underlying method of segmentation.
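The "deep sorting" idea in the abstract can be illustrated with a minimal sketch: a learned pairwise comparator predicts which of two frames would serve better as the annotation frame, and a bubble-sort-style pass over the video lets the predicted-best frame bubble to the top. The function and comparator names below are hypothetical illustrations, not the paper's actual implementation (which uses a deep network as the comparator).

```python
def bubble_select(frames, compare):
    """One bubble-sort-style pass over the video's frames.

    `compare(a, b)` is a stand-in for a learned pairwise predictor:
    it returns > 0 if frame `a` is predicted to be the better
    annotation frame than frame `b`. Adjacent frames are compared
    and swapped so the predicted-best frame bubbles to the end.
    Returns the index (into `frames`) of the selected frame.
    """
    order = list(range(len(frames)))
    for i in range(len(order) - 1):
        # carry the predicted-better frame forward in the ordering
        if compare(frames[order[i]], frames[order[i + 1]]) > 0:
            order[i], order[i + 1] = order[i + 1], order[i]
    return order[-1]


# Toy usage with numeric "quality scores" standing in for frames:
frames = [3, 1, 4, 1, 5]
best = bubble_select(frames, lambda a, b: a - b)
print(best)  # index of the frame that bubbled to the top
```

A full sort is unnecessary here: a single pass already surfaces the top candidate, which is why a bubble-sort-style comparison scheme is a natural fit for "pick one best frame" rather than "rank all frames".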

Previous: Dual Attention Network for Scene Segmentation

Next: Knowledge Adaptation for Efficient Semantic Segmentation
