Abstract
Multiple kernel learning for feature selection (MKL-FS) utilizes kernels to explore the complex properties of features and has achieved promising performance among embedded methods. However, the kernels in MKL-FS are generally required to be positive definite. In fact, indefinite kernels often arise in real-world applications and can achieve better empirical performance. Yet, due to the non-convexity induced by indefinite kernels, existing MKL-FS methods are usually inapplicable, and research on this setting remains scarce.
In this paper, we propose a novel multiple indefinite kernel feature selection method (MIK-FS) based on the primal framework of the indefinite kernel support vector machine (IKSVM), which applies an indefinite base kernel to each feature and then imposes an l1-norm constraint on the kernel combination coefficients to select features automatically.
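To make the formulation concrete (a sketch in our own notation, not drawn verbatim from the paper): given d features and an indefinite base kernel k_j evaluated on the j-th feature alone, the combined kernel and the selection constraint can be written as

    K(x, x') = \sum_{j=1}^{d} \mu_j \, k_j(x_j, x'_j), \qquad \mu_j \ge 0, \quad \|\mu\|_1 \le \tau,

where \mu collects the kernel combination coefficients and \tau controls sparsity; features whose coefficients are driven to zero by the l1-norm constraint are discarded automatically.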
A two-stage algorithm is further presented to alternately optimize the coefficients of IKSVM and the kernel combination. In this algorithm, we reformulate the non-convex optimization problem of the primal IKSVM as a difference of convex functions (DC) program and transform the non-convex problem into a convex one via an affine minorization approximation.
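As an illustration of the DC step only (a minimal generic sketch, assuming a decomposition f = g - h with g and h convex; the function names and toy objective below are our own and not the authors' IKSVM solver), replacing h by its affine minorization h(x_t) + \nabla h(x_t)^T (x - x_t) turns each subproblem into a convex one:

import numpy as np
from scipy.optimize import minimize

def dca(g, grad_h, x0, n_iter=50, tol=1e-8):
    # Generic DC algorithm for f(x) = g(x) - h(x) with g, h convex.
    # Replacing h by its affine minorization at the current iterate x_t
    # yields the convex subproblem min_z g(z) - grad_h(x_t) @ z.
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        s = grad_h(x)                                  # (sub)gradient of h at x_t
        x_new = minimize(lambda z: g(z) - s @ z, x).x  # solve convex surrogate
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Illustrative toy decomposition (not from the paper):
# f(x) = ||x||^2 - 2||x - c|| is non-convex overall, but
# g(x) = ||x||^2 and h(x) = 2||x - c|| are both convex.
c = np.array([1.0, -0.5])
g = lambda x: x @ x
grad_h = lambda x: 2.0 * (x - c) / max(np.linalg.norm(x - c), 1e-12)
print(dca(g, grad_h, x0=np.zeros(2)))

Each surrogate is convex because the concave part -h has been replaced by an affine upper bound, which is exactly the role the affine minorization plays in the two-stage algorithm above.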
Experiments on real-world datasets demonstrate that MIK-FS is superior to several related state-of-the-art methods in both feature selection and classification performance.