Abstract
We present a novel view on the indoor visual localization problem, where we avoid the use of interest points and associated descriptors, which are the basic building blocks of most standard methods. Instead, localization is cast as an alignment problem of the edges of the query image to a 3D model consisting of line segments. The proposed strategy is effective in low-textured indoor environments and in very wide baseline setups as it overcomes the dependency of image descriptors on textures, as well as their limited invariance to view point changes. The basic features of our method, which are prevalent indoors, are line segments. As we will show, they allow for defifining an effificient Chamfer distance-based aligning cost, computed through integral contour images, incorporated into a fifirst-best-search strategy. Experiments confifirm the effectiveness of the method in terms of both, accuracy and computational complexity