Abstract
3D reconstruction pipelines using structure-from-motion and multi-view stereo techniques are today able to reconstruct impressive, large-scale geometry models from images but do not yield textured results. Current texture creation methods are unable to handle the complexity and scale of these models. We therefore present the first comprehensive texturing framework for large-scale, real-world 3D reconstructions. Our method addresses most challenges occurring in such reconstructions: the large number of input images, their drastically varying properties such as image scale, (out-of-focus) blur, exposure variation, and occluders (e.g., moving plants or pedestrians). Using the proposed technique, we are able to texture datasets that are several orders of magnitude larger and far more challenging than shown in related work.