Margin-Based Generalization Lower Bounds for Boosted Classifiers

2020-02-19

Abstract

Boosting is one of the most successful ideas in machine learning. The most well-accepted explanations for the low generalization error of boosting algorithms such as AdaBoost stem from margin theory. The study of margins in the context of boosting algorithms was initiated by Schapire, Freund, Bartlett and Lee (1998) and has inspired numerous boosting algorithms and generalization bounds. To date, the strongest known generalization upper bound is the kth margin bound of Gao and Zhou (2013). Despite the numerous generalization upper bounds proved over the last two decades, nothing is known about the tightness of these bounds. In this paper, we give the first margin-based lower bounds on the generalization error of boosted classifiers. Our lower bounds nearly match the kth margin bound and thus almost settle the generalization performance of boosted classifiers in terms of margins.
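For readers unfamiliar with margin bounds, the following is a minimal LaTeX sketch of the quantities involved. It states the classical margin bound of Schapire et al. (1998) in a simplified form; the constants and log factors shown are illustrative only, and the sharper kth margin bound of Gao and Zhou (2013), as well as the lower bounds of this paper, are stated precisely in the respective papers.

% A voting classifier over a base class H of hypotheses h : X -> {-1, +1}:
%   f(x) = \sum_t \alpha_t h_t(x),  with  \alpha_t \ge 0,  \sum_t \alpha_t = 1.
% The margin of a labeled example (x, y), y \in \{-1, +1\}, is y f(x) \in [-1, 1].
%
% Classical margin bound (Schapire et al., 1998), up to constants:
% with probability at least 1 - \delta over an i.i.d. sample S of size m,
% simultaneously for all voting classifiers f and all \theta \in (0, 1],
\[
  \Pr_{(x,y) \sim D}\bigl[y f(x) \le 0\bigr]
  \;\le\;
  \Pr_{(x,y) \sim S}\bigl[y f(x) \le \theta\bigr]
  + O\!\left(\sqrt{\frac{\ln m \,\ln |H|}{m\,\theta^{2}}}
  + \sqrt{\frac{\ln(1/\delta)}{m}}\right).
\]

The kth margin bound of Gao and Zhou (2013) refines this by taking into account the k smallest margins on the sample rather than a single threshold theta; the lower bounds of this paper show that bounds of this form are nearly tight.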
