FEELVOS: Fast End-to-End Embedding Learning for Video Object Segmentation

2019-09-11

Abstract: Many of the recent successful methods for video object segmentation (VOS) are overly complicated, heavily rely on fine-tuning on the first frame, and/or are slow, and are hence of limited practical use. In this work, we propose FEELVOS as a simple and fast method which does not rely on fine-tuning. In order to segment a video, for each frame FEELVOS uses a semantic pixel-wise embedding together with a global and a local matching mechanism to transfer information from the first frame and from the previous frame of the video to the current frame. In contrast to previous work, our embedding is only used as an internal guidance of a convolutional network. Our novel dynamic segmentation head allows us to train the network, including the embedding, end-to-end for the multiple object segmentation task with a cross entropy loss. We achieve a new state of the art in video object segmentation without fine-tuning with a J&F measure of 71.5% on the DAVIS 2017 validation set. We make our code and models available at https://github.com/tensorflow/models/tree/master/research/feelvos.
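For readers who want a concrete picture of the matching mechanism the abstract describes, here is a minimal NumPy sketch of the global matching step. The distance function d(p, q) = 1 - 2 / (1 + exp(||e_p - e_q||^2)) follows the FEELVOS paper; the function names, array shapes, and the brute-force nearest-neighbor search are illustrative assumptions, not the authors' released TensorFlow code. Local matching works analogously but restricts the search to a small window around each pixel's location in the previous frame.

```python
import numpy as np

def pairwise_distance(emb_q, emb_ref):
    """FEELVOS embedding distance: d(p, q) = 1 - 2 / (1 + exp(||e_p - e_q||^2)).

    emb_q:   (Nq, D) query-pixel embeddings (current frame).
    emb_ref: (Nr, D) reference-pixel embeddings (first frame).
    Returns a (Nq, Nr) matrix of distances in [0, 1).
    """
    sq = ((emb_q[:, None, :] - emb_ref[None, :, :]) ** 2).sum(axis=-1)
    return 1.0 - 2.0 / (1.0 + np.exp(sq))

def global_matching(emb_cur, emb_first, first_mask, obj_id):
    """Global matching distance map for one object.

    For every pixel of the current frame, computes the distance to its
    nearest first-frame pixel labeled `obj_id` (assumed to exist there).

    emb_cur:    (H, W, D) current-frame embeddings.
    emb_first:  (H, W, D) first-frame embeddings.
    first_mask: (H, W) integer object-id mask of the first frame.
    Returns an (H, W) nearest-neighbor distance map, used as internal
    guidance for the segmentation head rather than as a final prediction.
    """
    h, w, d = emb_cur.shape
    ref = emb_first[first_mask == obj_id]                   # (Nr, D)
    dist = pairwise_distance(emb_cur.reshape(-1, d), ref)   # (H*W, Nr)
    return dist.min(axis=1).reshape(h, w)
```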
