Abstract
In the matching tasks which form an integral part of all types of tracking and geometrical vision, there are invariably priors available on the absolute and/or relative image locations of features of interest. Usually, these priors are used post-hoc in the process of resolving feature matches and obtaining final scene estimates, via ‘first get candidate matches, then resolve’ consensus algorithms such as RANSAC. In this paper we show that the dramatically different approach of using priors dynamically to guide a feature-by-feature matching search can achieve global matching with far fewer image processing operations and lower overall computational cost. Essentially, we put image processing into the loop of the search for global consensus. In particular, our approach is able to cope with significant image ambiguity thanks to a dynamic mixture-of-Gaussians treatment. In our fully Bayesian algorithm, the choice of the most efficient search action at each step is guided intuitively and rigorously by expected Shannon information gain. We demonstrate the algorithm in feature matching as part of a sequential SLAM system for 3D camera tracking. Robust, real-time matching can be achieved even in the previously unmanageable case of jerky, rapid motion necessitating weak motion modelling and large search regions.
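To make the action-selection criterion concrete, the following is a minimal illustrative sketch (not the paper's algorithm, which handles dynamic Gaussian mixtures) of ranking candidate feature measurements by expected Shannon information gain in the simple linear-Gaussian case. The function names and the two-feature setup are our own assumptions for illustration: for a measurement z = Hx + v with v ~ N(0, R) and prior covariance P, the expected entropy reduction is ½ log(det S / det R), where S = H P Hᵀ + R is the innovation covariance.

```python
import numpy as np

def expected_info_gain(P, H, R):
    """Expected Shannon information gain (nats) from measuring z = H x + v.

    For a linear-Gaussian model the posterior covariance after a Kalman
    update satisfies det(P_post) / det(P) = det(R) / det(S), so the
    entropy reduction is 0.5 * (log det S - log det R).
    """
    S = H @ P @ H.T + R  # innovation covariance
    return 0.5 * (np.log(np.linalg.det(S)) - np.log(np.linalg.det(R)))

# Hypothetical example: a 2D state where feature 1 is far more uncertain
# than feature 2; measuring the uncertain feature is more informative.
P = np.diag([4.0, 1.0])            # prior covariance
R = np.array([[1.0]])              # measurement noise (same for both)
H1 = np.array([[1.0, 0.0]])        # candidate measurement of feature 1
H2 = np.array([[0.0, 1.0]])        # candidate measurement of feature 2

gains = [expected_info_gain(P, H, R) for H in (H1, H2)]
best = int(np.argmax(gains))       # index of the most informative action
```

In the paper's setting the same criterion is evaluated over mixture-of-Gaussians search states, so each candidate image-processing operation is scored before any pixels are touched, and only the most informative search region is examined next.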