Abstract
We propose a new local learning scheme that is based on the principle of decisiveness: the learned classifier is expected to exhibit large variability in the direction of the test example. We show how this principle leads to optimization functions in which the regularization term is modified, rather than the empirical loss term as in most local learning schemes. We combine this local learning method with a Canonical Correlation Analysis based classification method, which is shown to be similar to multiclass LDA. Finally, we show that the classification function can be computed efficiently by reusing the results of previous computations. In a variety of experiments on new and existing data sets, we demonstrate the effectiveness of the CCA based classification method compared to SVM and Nearest Neighbor classifiers, and show that the newly proposed local learning method improves it even further, and outperforms conventional local learning schemes.