Normalized Cut Loss for Weakly-supervised CNN Segmentation

2019-10-22

Abstract: Most recent semantic segmentation methods train deep convolutional neural networks with fully annotated masks, which require pixel accuracy for good-quality training. Common weakly-supervised approaches generate full masks from partial input (e.g. scribbles or seeds) using standard interactive segmentation methods as preprocessing. However, errors in such masks result in poorer training, since standard loss functions (e.g. cross-entropy) do not distinguish seeds from the remaining, potentially mislabeled pixels. Inspired by general ideas in semi-supervised learning, we address these problems via a new principled loss function that evaluates network output with criteria standard in "shallow" segmentation, e.g. normalized cut. Unlike prior work, the cross-entropy part of our loss evaluates only seeds, where labels are known, while normalized cut softly evaluates the consistency of all pixels. We focus on a normalized cut loss whose dense Gaussian kernel is efficiently implemented in linear time by fast bilateral filtering. Our normalized cut loss approach to segmentation brings the quality of weakly-supervised training significantly closer to fully supervised methods.
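To make the structure of this training objective concrete, below is a minimal NumPy sketch of a partial cross-entropy term evaluated only on seed pixels combined with a soft normalized cut term evaluated over all pixels. It is an assumption-laden illustration, not the authors' implementation: it builds a naive dense O(N²) affinity matrix instead of the fast bilateral filtering the paper uses for the Gaussian kernel, and the function names and the weight `lam` are hypothetical.

```python
import numpy as np

def partial_cross_entropy(probs, seed_labels):
    """Cross-entropy evaluated only on seed pixels (labels >= 0).

    probs: (K, N) softmax outputs for K classes over N pixels.
    seed_labels: (N,) integer labels, -1 for unlabeled pixels.
    """
    seeds = seed_labels >= 0
    p = probs[seed_labels[seeds], np.flatnonzero(seeds)]
    return -np.log(p + 1e-12).mean()

def normalized_cut_loss(probs, W):
    """Soft normalized cut over all pixels.

    probs: (K, N) soft segmentations S_k (one row per class).
    W: (N, N) dense affinity matrix, e.g. a Gaussian kernel over
       pixel colour/position (naive stand-in for bilateral filtering).
    Returns sum_k  S_k^T W (1 - S_k) / (d^T S_k)  with degrees d = W 1.
    """
    d = W.sum(axis=1)                      # node degrees
    loss = 0.0
    for S_k in probs:                      # one soft mask per class
        cut = S_k @ (W @ (1.0 - S_k))      # affinity to the segment's complement
        assoc = d @ S_k                    # total degree of the soft segment
        loss += cut / (assoc + 1e-12)
    return loss

def weakly_supervised_loss(probs, seed_labels, W, lam=0.5):
    # Seeds get hard supervision; all pixels get the soft NC regularizer.
    # lam is an illustrative trade-off weight, not a value from the paper.
    return partial_cross_entropy(probs, seed_labels) + lam * normalized_cut_loss(probs, W)
```

In practice the affinity matrix is never materialized: the products W @ S_k are what fast bilateral filtering evaluates in linear time, which is what makes the dense Gaussian kernel tractable for full-resolution images.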

