Abstract
This paper proposes a deep neural network (DNN) for
piece-wise planar depthmap reconstruction from a single RGB image. While DNNs have brought remarkable
progress to single-image depth prediction, piece-wise planar depthmap reconstruction requires a structured geometry representation and has remained a difficult task to master
even for DNNs. The proposed end-to-end DNN learns to
directly infer a set of plane parameters and corresponding plane segmentation masks from a single RGB image.
We have generated more than 50,000 piece-wise planar
depthmaps for training and testing from ScanNet, a large-scale RGB-D video database. Our qualitative and quantitative evaluations demonstrate that the proposed approach
outperforms baseline methods in terms of both plane segmentation and depth estimation accuracy. To the best of our
knowledge, this paper presents the first end-to-end neural
architecture for piece-wise planar reconstruction from a single RGB image. Code and data are available at https://github.com/art-programmer/PlaneNet.