Abstract
The availability of commodity depth sensors such as Kinect has enabled the development of methods which can densely reconstruct arbitrary scenes. While the results of these methods are accurate and visually appealing, they are quite often incomplete. This is either because only part of the space was visible during the data capture process or because surfaces were occluded by other objects in the scene. In this paper, we address the problem of completing and refining such reconstructions. We propose a method for scene completion that can infer the layout of the complete room and the full extent of partially occluded objects. We propose a new probabilistic model, Contour Completion Random Fields, that allows us to complete the boundaries of occluded surfaces. We evaluate our method on synthetic and real-world reconstructions of 3D scenes and show that it quantitatively and qualitatively outperforms standard methods. We have created a large dataset of partial and complete reconstructions which we will make available to the community as a benchmark for the scene completion task. Finally, we demonstrate the practical utility of our algorithm via an augmented-reality application in which objects interact with the completed reconstructions inferred by our method.