Abstract
Discriminative dictionary learning aims to learn a dictionary from training samples to enhance the discriminative capability of their coding vectors. Several discrimination terms have been proposed by assessing the prediction loss (e.g., logistic regression) or a class separation criterion (e.g., the Fisher discrimination criterion) on the coding vectors. In this paper, we provide a new insight into discriminative dictionary learning. Specifically, we formulate the discrimination term as the weighted summation of the squared distances between all pairs of coding vectors. The discrimination term in the state-of-the-art Fisher discrimination dictionary learning (FDDL) method can be explained as a special case of our model, where the weights are simply determined by the number of samples in each class. We then propose a parameterization method to adaptively determine the weight of each coding vector pair, which leads to a support vector guided dictionary learning (SVGDL) model. Compared with FDDL, SVGDL can adaptively assign different weights to different pairs of coding vectors. More importantly, SVGDL automatically selects only a few critical pairs to assign non-zero weights, resulting in better generalization ability for pattern recognition tasks. The experimental results on a series of benchmark databases show that SVGDL outperforms many state-of-the-art discriminative dictionary learning methods.
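The discrimination term described above, a weighted sum of squared distances between all pairs of coding vectors, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, the column-wise layout of the coding matrix `Z`, and the symmetric weight matrix `W` are assumptions for the example.

```python
import numpy as np

def pairwise_discrimination(Z, W):
    """Weighted summation of squared distances between all pairs of coding vectors.

    Z : (d, n) array, each column is a coding vector.
    W : (n, n) array, W[i, j] is the weight assigned to the pair (i, j);
        in FDDL the weights would depend only on class sample counts,
        while SVGDL leaves most of them zero.
    """
    n = Z.shape[1]
    total = 0.0
    for i in range(n):
        for j in range(n):
            # squared Euclidean distance between coding vectors i and j
            total += W[i, j] * np.sum((Z[:, i] - Z[:, j]) ** 2)
    return total
```

A sparse `W`, with non-zero entries only on a few critical pairs, is what distinguishes the SVGDL weighting from the dense, class-size-determined weighting in FDDL.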