Abstract Typical graph-theoretic approaches for semi-supervised classification infer the labels of unlabeled instances with the help of graph Laplacians. Founded on the spectral decomposition of the graph Laplacian, this paper learns a kernel matrix by minimizing the leave-one-out classification error on the labeled instances. To this end, an efficient algorithm based on linear programming is presented, resulting in a transductive spectral kernel. The idea of our algorithm stems from regularization methodology and also has a natural interpretation in terms of spectral clustering. A simple classifier can be readily built upon the learned kernel, which suffices to give predictions for any data point beyond those in the available dataset. Besides this usage, the spectral kernel can be effectively used in tandem with conventional kernel machines such as SVMs. We demonstrate the efficacy of the proposed algorithm through experiments on challenging classification tasks.
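To make the underlying construction concrete, the following is a minimal sketch of building a spectral kernel from the eigendecomposition of the normalized graph Laplacian. The spectral coefficients (`weights`) are fixed by hand here purely for illustration; in the paper's algorithm they would instead be learned by minimizing the leave-one-out error via linear programming. The adjacency matrix `W` and the helper name `spectral_kernel` are hypothetical.

```python
import numpy as np

def spectral_kernel(W, weights):
    """Build a spectral kernel K = U diag(weights) U^T from the
    eigenvectors of the normalized graph Laplacian
    L = I - D^{-1/2} W D^{-1/2}.

    `weights` assigns a coefficient to each of the leading
    eigenvectors (those with the smallest eigenvalues, i.e. the
    smoothest functions on the graph)."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
    # eigh returns eigenvalues in ascending order for symmetric matrices
    eigvals, eigvecs = np.linalg.eigh(L)
    U = eigvecs[:, :len(weights)]          # keep the smoothest eigenvectors
    return U @ np.diag(weights) @ U.T      # positive semidefinite kernel

# Toy graph: two tightly connected pairs, weakly linked to each other
W = np.array([[0.0, 1.0, 0.1, 0.0],
              [1.0, 0.0, 0.0, 0.1],
              [0.1, 0.0, 0.0, 1.0],
              [0.0, 0.1, 1.0, 0.0]])
K = spectral_kernel(W, weights=[1.0, 0.5])
```

Because the retained eigenvectors vary slowly over densely connected regions, points in the same cluster receive similar kernel rows, which is what makes such a kernel useful both for direct nearest-prototype prediction and as input to standard kernel machines such as SVMs.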