Abstract
Naive Bayes Nearest Neighbor (NBNN) is a feature-based image classifier that achieves an impressive degree of accuracy [1] by exploiting ‘Image-to-Class’ distances and by avoiding quantization of local image descriptors. It is based on the hypothesis that each local descriptor is drawn from a class-dependent probability measure. The density of this measure is estimated by a non-parametric kernel estimator, which is further simplified under the assumption that the normalization factor is class-independent. While it leads to a significant simplification, this assumption is too restrictive and considerably degrades the generalization ability of the original NBNN. The goal of this paper is to address this issue. Relaxing the assumption in question leads to a parameter selection problem, which we solve by hinge-loss minimization. We also show that our modified formulation naturally generalizes to optimal combinations of feature types. Experiments conducted on several datasets show that the gain over the original NBNN can reach 20 percentage points. We also take advantage of the linearity of optimal NBNN to perform classification by detection through efficient sub-window search [2], with yet another performance gain. As a result, our classifier outperforms, in terms of misclassification error, methods based on support vector machines and bags of quantized features on some datasets.
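To make the relaxation concrete, the following display sketches the decision rules involved; the notation ($d_i$ for the local descriptors of a test image, $\mathrm{NN}_c(d_i)$ for the nearest neighbor of $d_i$ among the training descriptors of class $c$, and per-class parameters $\alpha_c$, $\beta_c$) is illustrative rather than quoted from the paper body. The original NBNN rule,
\[
\hat{c} \;=\; \arg\min_{c} \sum_{i} \bigl\| d_i - \mathrm{NN}_c(d_i) \bigr\|^2 ,
\]
implicitly relies on the class-independent normalization factor. Dropping that assumption would introduce affine per-class correction terms,
\[
\hat{c} \;=\; \arg\min_{c} \sum_{i} \Bigl( \alpha_c \bigl\| d_i - \mathrm{NN}_c(d_i) \bigr\|^2 + \beta_c \Bigr),
\]
with $(\alpha_c, \beta_c)$ learned by hinge-loss minimization, as described above. The corrected score remains linear in these parameters, which is the linearity exploited for efficient sub-window search [2].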