Learning in Reproducing Kernel Krein Spaces

2020-03-16

Abstract

We formulate a novel regularized risk minimization problem for learning in reproducing kernel Krein spaces and show that the strong representer theorem applies to it. As a result, the learning problem can be expressed as the minimization of a quadratic form over a hypersphere of constant radius. We present an algorithm that can find a globally optimal solution to this nonconvex optimization problem in time cubic in the number of instances. Moreover, we derive the gradient of the solution with respect to its hyperparameters and, in this way, provide means for efficient hyperparameter tuning. The approach comes with a generalization bound expressed in terms of the Rademacher complexity of the corresponding hypothesis space. The major advantage over standard kernel methods is the ability to learn with various domain-specific similarity measures for which positive definiteness does not hold or is difficult to establish. The approach is evaluated empirically using indefinite kernels defined on structured as well as vectorial data. The empirical results demonstrate a superior performance of our approach over the state-of-the-art baselines.
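The core computational step described in the abstract, minimizing a quadratic form over a hypersphere of fixed radius, can be solved globally despite being nonconvex. As a minimal sketch (the paper's actual algorithm handles the full regularized problem; the function name and the restriction to a homogeneous quadratic form here are illustrative assumptions), the minimizer of a pure quadratic form on the sphere is the smallest eigenvector scaled to the given radius, obtainable via an O(n^3) eigendecomposition:

```python
import numpy as np

def min_quadratic_on_sphere(M, r):
    """Globally minimize f(a) = a^T M a subject to ||a|| = r.

    Sketch only: for a homogeneous quadratic form, the global minimum
    is attained at r times a unit eigenvector of the smallest eigenvalue
    of the symmetric part of M (an O(n^3) eigendecomposition).
    """
    S = 0.5 * (M + M.T)          # symmetrize; f depends only on this part
    w, V = np.linalg.eigh(S)     # eigenvalues in ascending order
    return r * V[:, 0]           # scale the smallest eigenvector to radius r
```

For example, with M = diag(3, -1, 2) and r = 2, the minimizer is ±(0, 2, 0) with objective value -4, matching r² times the smallest eigenvalue.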

