Abstract
We present a method for learning feature descriptors using multiple images, motivated by the problems of mobile robot navigation and localization. The technique uses the relative simplicity of small-baseline tracking in image sequences to develop descriptors suitable for the more challenging task of wide-baseline matching across significant viewpoint changes. The variations in the appearance of each feature are learned using kernel principal component analysis (KPCA) over the course of image sequences. An approximate version of KPCA is applied to reduce the computational complexity of the algorithms and yield a compact representation. Our experiments demonstrate robustness to wide appearance variations on non-planar surfaces, including changes in illumination, viewpoint, scale, and geometry of the scene.
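To make the core technique concrete, the following is a minimal NumPy sketch of kernel PCA applied to a stack of tracked feature appearances. The RBF kernel, patch dimensions, bandwidth `gamma`, and synthetic data are illustrative assumptions, not the paper's actual settings or its approximate-KPCA variant.

```python
import numpy as np

def kernel_pca(X, n_components=8, gamma=0.01):
    """Project rows of X onto the top KPCA components (RBF kernel)."""
    # RBF (Gaussian) kernel matrix between all pairs of samples
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    # Center the kernel matrix in feature space
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecomposition; keep the largest eigenvalues
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Projections of the training samples onto the principal components
    return vecs * np.sqrt(np.maximum(vals, 0.0))

# Synthetic stand-in: 200 tracked appearances of an 11x11 patch (flattened)
rng = np.random.default_rng(0)
patches = rng.standard_normal((200, 121))
coords = kernel_pca(patches, n_components=8)  # compact 8-D descriptor per view
```

Exact KPCA scales cubically with the number of samples, which is why the paper employs an approximate variant for a compact, cheaper representation.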