STD2P: RGBD Semantic Segmentation using Spatio-Temporal Data-Driven Pooling

2019-12-06
Abstract

We propose a novel superpixel-based multi-view convolutional neural network for semantic image segmentation. The proposed network produces a high-quality segmentation of a single image by leveraging information from additional views of the same scene. Particularly in indoor videos, such as those captured by robotic platforms or handheld and body-worn RGBD cameras, nearby video frames provide diverse viewpoints and additional context of objects and scenes. To leverage such information, we first compute region correspondences by optical flow and image boundary-based superpixels. Given these region correspondences, we propose a novel spatio-temporal pooling layer to aggregate information over space and time. We evaluate our approach on the NYU Depth V2 and the SUN3D datasets and compare it to various state-of-the-art single-view and multi-view approaches. Besides a general improvement over the state-of-the-art, we also show the benefits of making use of unlabeled frames during training for multi-view as well as single-view prediction.
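The core idea of the pooling layer described above is to average per-pixel CNN features over corresponding superpixel regions across several views, so that each region receives one descriptor aggregated over space and time. The following is a minimal NumPy sketch of that aggregation step, not the authors' exact layer: the function name, the label-map encoding (corresponding regions share an integer id across views, unmatched pixels marked -1), and the plain averaging are all illustrative assumptions.

```python
import numpy as np

def spatio_temporal_region_pool(features, region_ids, n_regions):
    """Sketch of spatio-temporal data-driven pooling (assumed interface).

    features:   (T, H, W, C) per-pixel features from T views of the scene
    region_ids: (T, H, W) int map; region_ids[t, y, x] = r assigns the pixel
                to region r (corresponding regions share an id across views,
                e.g. linked by optical flow); -1 marks unmatched pixels
    returns:    (n_regions, C) one pooled descriptor per region
    """
    C = features.shape[-1]
    pooled = np.zeros((n_regions, C))
    counts = np.zeros(n_regions)
    flat_ids = region_ids.reshape(-1)
    flat_feat = features.reshape(-1, C)
    valid = flat_ids >= 0
    # Accumulate feature sums and pixel counts per region over all views,
    # then divide to obtain a per-region average over space and time.
    np.add.at(pooled, flat_ids[valid], flat_feat[valid])
    np.add.at(counts, flat_ids[valid], 1)
    counts = np.maximum(counts, 1)  # guard empty regions against divide-by-zero
    return pooled / counts[:, None]
```

The pooled per-region descriptors could then be classified and the predicted label broadcast back to every pixel of the region in the target frame, which is how a region-level pooling scheme typically produces a dense segmentation.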
