
Generalization Bounds for Learning Kernels


Abstract

This paper presents several novel generalization bounds for the problem of learning kernels based on a combinatorial analysis of the Rademacher complexity of the corresponding hypothesis sets. Our bound for learning kernels with a convex combination of $p$ base kernels using $L_1$ regularization admits only a $\sqrt{\log p}$ dependency on the number of kernels, which is tight and considerably more favorable than the previous best bound given for the same problem. We also give a novel bound for learning with a non-negative combination of $p$ base kernels with an $L_2$ regularization whose dependency on $p$ is also tight and only in $p^{1/4}$. We present similar results for $L_q$ regularization with other values of $q$, and outline the relevance of our proof techniques to the analysis of the complexity of the class of linear functions. Experiments with a large number of kernels further validate the behavior of the generalization error as a function of $p$ predicted by our bounds.
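For concreteness, the hypothesis sets in question can be sketched as follows. This is our own rendering under standard multiple kernel learning conventions, not a quotation from the paper; in particular the symbols $\Phi_\mu$, $\mathbb{H}_\mu$, and $\Lambda$ are our notation. Given base kernels $K_1, \dots, K_p$, one learns over the combined kernel and hypothesis set

$$K_\mu = \sum_{k=1}^{p} \mu_k K_k, \qquad \mu_k \ge 0,\quad \|\mu\|_q \le 1,$$

$$H_q = \bigl\{\, x \mapsto \langle w, \Phi_\mu(x) \rangle \;:\; \|w\|_{\mathbb{H}_\mu} \le \Lambda \,\bigr\},$$

where $\Phi_\mu$ is the feature map of $K_\mu$ and $\mathbb{H}_\mu$ its reproducing kernel Hilbert space. By the standard Rademacher complexity generalization bound (for losses bounded in $[0,1]$), with probability at least $1 - \delta$ over a sample $S$ of size $m$, every $h \in H_q$ satisfies

$$R(h) \;\le\; \widehat{R}_S(h) + 2\,\mathfrak{R}_m(H_q) + \sqrt{\frac{\log(1/\delta)}{2m}},$$

so a complexity estimate of order $\sqrt{(\log p)/m}$ for $q = 1$, or $p^{1/4}/\sqrt{m}$ for $q = 2$, translates directly into the dependencies on $p$ stated in the abstract.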


