
The Impact of Unlabeled Patterns in Rademacher Complexity Theory for Kernel Classifiers


Abstract

We derive here new generalization bounds, based on Rademacher Complexity theory, for model selection and error estimation of linear (kernel) classifiers, which exploit the availability of unlabeled samples. In particular, two results are obtained: the first one shows that, using the unlabeled samples, the confidence term of the conventional bound can be reduced by a factor of three; the second one shows that the unlabeled samples can be used to obtain much tighter bounds, by building localized versions of the hypothesis class containing the optimal classifier.
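The following sketch is not taken from the paper; it only illustrates, under standard assumptions, why unlabeled patterns are useful in this setting. For the kernel class F = {x ↦ ⟨w, φ(x)⟩ : ‖w‖ ≤ B}, the empirical Rademacher complexity equals (B/n) E_σ[√(σᵀKσ)], where K is the Gram matrix of the inputs and σ is a vector of independent ±1 signs. This quantity depends only on the input patterns, not on the labels, so it can be estimated from unlabeled samples. The function name, the RBF kernel, and all parameter values below are illustrative assumptions, not the authors' construction.

```python
import numpy as np

def empirical_rademacher_kernel(K, B=1.0, n_draws=2000, seed=None):
    """Monte Carlo estimate of the empirical Rademacher complexity of the
    kernel class F = {x -> <w, phi(x)> : ||w|| <= B}.

    For this class, sup_{f in F} (1/n) sum_i sigma_i f(x_i) = (B/n) * sqrt(sigma^T K sigma),
    so we average that quantity over random Rademacher sign vectors sigma.
    Only the Gram matrix K of the inputs is required, so unlabeled patterns
    can be used to compute it.
    """
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    sigmas = rng.choice([-1.0, 1.0], size=(n_draws, n))
    # Quadratic form sigma^T K sigma for each random sign vector.
    quad = np.einsum('ij,jk,ik->i', sigmas, K, sigmas)
    return (B / n) * np.mean(np.sqrt(np.maximum(quad, 0.0)))

# Example: RBF Gram matrix on synthetic unlabeled inputs (assumed data).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                       # 200 unlabeled patterns
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq_dists)                         # RBF kernel, gamma = 0.5
print("estimated empirical Rademacher complexity:",
      empirical_rademacher_kernel(K))
```

The Monte Carlo average can be replaced by the coarser closed-form upper bound (B/n)·√(tr(K)), which follows from Jensen's inequality since E[σᵀKσ] = tr(K); the sampled estimate is simply tighter at the cost of the extra draws.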
