Abstract
We propose a novel superpixel-based multi-view convolutional neural network for semantic image segmentation. The proposed network produces a high-quality segmentation of a single image by leveraging information from additional views of the same scene. Particularly in indoor videos, such as those captured by robotic platforms or handheld and body-worn RGBD cameras, nearby video frames provide diverse viewpoints and additional context of objects and scenes. To leverage such information, we first compute region correspondences by optical flow and image boundary-based superpixels. Given these region correspondences, we propose a novel spatio-temporal pooling layer to aggregate information over space and time. We evaluate our approach on the NYU-Depth-V2 and the SUN3D datasets and compare it to various state-of-the-art single-view and multi-view approaches. Besides a general improvement over the state of the art, we also show the benefits of making use of unlabeled frames during training for both multi-view and single-view prediction.
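To make the pooling idea concrete, the following is a minimal NumPy sketch of superpixel-based spatio-temporal average pooling. It assumes region correspondences have already been computed and are given as shared integer labels across frames (the same label on two frames marks the same scene region); all function and variable names here are illustrative, not the paper's implementation.

```python
import numpy as np

def spatiotemporal_pool(features, region_ids, num_regions):
    """Average-pool per-frame CNN features over corresponded regions.

    features   : (T, H, W, C) array of feature maps for T frames.
    region_ids : (T, H, W) int array; region_ids[t, y, x] = r means pixel
                 (y, x) in frame t belongs to corresponded region r;
                 -1 marks pixels with no correspondence.
    num_regions: number of distinct corresponded regions R.

    Returns an (R, C) array with one pooled descriptor per region,
    aggregated over both space (pixels) and time (frames).
    """
    T, H, W, C = features.shape
    flat_feats = features.reshape(-1, C)      # (T*H*W, C)
    flat_ids = region_ids.reshape(-1)         # (T*H*W,)
    valid = flat_ids >= 0

    sums = np.zeros((num_regions, C))
    counts = np.zeros(num_regions)
    # Unbuffered scatter-add: accumulate features and pixel counts per region.
    np.add.at(sums, flat_ids[valid], flat_feats[valid])
    np.add.at(counts, flat_ids[valid], 1.0)
    counts = np.maximum(counts, 1.0)          # guard empty regions
    return sums / counts[:, None]

def unpool_to_target(pooled, target_ids):
    """Scatter pooled region descriptors back onto one frame's pixels."""
    out = pooled[np.clip(target_ids, 0, None)].copy()  # (H, W, C)
    out[target_ids < 0] = 0.0                 # unmatched pixels get zeros
    return out

if __name__ == "__main__":
    # Toy example: 3 frames, 4x4 feature maps, 8 channels, 5 regions.
    T, H, W, C, R = 3, 4, 4, 8, 5
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(T, H, W, C))
    ids = rng.integers(-1, R, size=(T, H, W))
    pooled = spatiotemporal_pool(feats, ids, R)
    target = unpool_to_target(pooled, ids[0])  # features for frame 0
    print(pooled.shape, target.shape)          # (5, 8) (4, 4, 8)
```

Average pooling is used here purely as a simple, differentiable aggregator over each corresponded region; the unpooling step illustrates how multi-view evidence can be mapped back onto a single target view for per-pixel prediction.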