Abstract
Conventional approaches to relevance feedback in content-based image retrieval assume that relevant images lie physically close to the query image, or that the query region can be identified by a set of clustering centers. However, semantically related images are often scattered across the visual space, so a refined query point or a set of clustering centers cannot reliably represent a complex query region. In this work, we propose a novel relevance feedback approach that directly extracts a set of samples to represent the query region, regardless of its underlying shape. The sample set extracted by our method is both competent and compact for subsequent retrieval. Moreover, we integrate feature re-weighting into the process to estimate the importance of each image descriptor. Unlike most existing relevance feedback approaches, in which all query points share the same feature weight distribution, our method re-weights the feature importance for each relevant image individually, so that the representative and discriminative ability of all the images is maximized. Experimental results on two databases demonstrate the effectiveness of our approach.