Abstract
In this paper, we propose a method to refine the geometry of 3D meshes from Kinect fusion by exploiting shading cues captured by the infrared (IR) camera of the Kinect. A major benefit of using the Kinect IR camera instead of an RGB camera is that the IR images captured by the Kinect are narrow-band images that filter out most undesired ambient light, which makes our system robust to natural indoor illumination. We define a near-light IR shading model that describes the captured intensity as a function of surface normals, albedo, lighting direction, and the distance between the light source and surface points. To resolve the ambiguity between normals and distance in our model, we utilize an initial 3D mesh from Kinect fusion and multi-view information to reliably estimate surface details that were not reconstructed by Kinect fusion. Our approach operates directly on a 3D mesh model for geometry refinement. The effectiveness of our approach is demonstrated through several challenging real-world examples.
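As an illustration of the kind of near-light shading model the abstract describes, one common Lambertian form with inverse-square light falloff is sketched below; the specific equation is our assumption for exposition, not necessarily the paper's exact model:

```latex
% Hypothetical near-light Lambertian IR shading model (illustrative sketch):
% I(p)  : observed IR intensity at surface point p
% rho(p): surface albedo
% n(p)  : unit surface normal
% L     : position of the IR light source (near-light, not at infinity)
% d(p)  = \|L - p\| : distance from the light source to the surface point
\[
  I(p) \;=\; \rho(p)\,
  \frac{\max\!\bigl(0,\; n(p)\cdot\frac{L - p}{\lVert L - p\rVert}\bigr)}
       {\lVert L - p\rVert^{2}}
\]
% The 1/d^2 falloff couples the surface normal n(p) with the distance d(p):
% a darker pixel can be explained either by a tilted normal or by a farther
% surface. This is the normal-distance ambiguity that the initial Kinect
% fusion mesh and multi-view information are used to resolve.
```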