
The Fast Convergence of Boosting

2020-01-08

Abstract

This manuscript considers the convergence rate of boosting under a large class of losses, including the exponential and logistic losses, where the best previous rate of convergence was O(exp(1/ε²)). First, it is established that the setting of weak learnability aids the entire class, granting a rate O(ln(1/ε)). Next, the (disjoint) conditions under which the infimal empirical risk is attainable are characterized in terms of the sample and the weak learning class, and a new proof is given for the known rate O(ln(1/ε)). Finally, it is established that any instance can be decomposed into two smaller instances resembling the two preceding special cases, yielding a rate O(1/ε), with a matching lower bound for the logistic loss. The principal technical hurdle throughout this work is the potential unattainability of the infimal empirical risk; the technique for overcoming this barrier may be of general interest.
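To make the object of study concrete, the sketch below treats boosting in the standard way as greedy coordinate descent on the empirical logistic risk over a finite class of weak hypotheses, and records how the risk decreases with the number of rounds. This is an illustrative toy (random ±1 data, a fixed step size justified by the 1/4-smoothness of the logistic loss), not the paper's analysis; all names and parameters here are chosen for the demo.

```python
import numpy as np

# Toy instance: n examples, d weak hypotheses with values in {-1, +1}.
# H[i, j] = h_j(x_i); y[i] is the label of example i. (Hypothetical data.)
rng = np.random.default_rng(0)
n, d = 40, 6
H = rng.choice([-1.0, 1.0], size=(n, d))
y = rng.choice([-1.0, 1.0], size=n)

def logistic_risk(margins):
    """Empirical logistic risk (1/n) * sum_i log(1 + exp(-y_i f(x_i))),
    where margins[i] = y_i * f(x_i)."""
    return float(np.mean(np.logaddexp(0.0, -margins)))

margins = np.zeros(n)            # f = 0 initially, so all margins are 0
history = [logistic_risk(margins)]

for t in range(300):
    # Partial derivative of the risk w.r.t. the weight of each h_j:
    # dR/dw_j = (1/n) sum_i -y_i H[i,j] / (1 + exp(margins[i])).
    grad = -np.mean(((y / (1.0 + np.exp(margins)))[:, None]) * H, axis=0)
    j = int(np.argmax(np.abs(grad)))     # steepest coordinate (weak learner)
    # Along any coordinate the risk is 1/4-smooth (|y_i h_j(x_i)| = 1),
    # so the step -grad[j] / (1/4) is guaranteed to decrease the risk.
    step = -4.0 * grad[j]
    margins += step * y * H[:, j]        # f <- f + step * h_j
    history.append(logistic_risk(margins))

print(f"risk: {history[0]:.4f} -> {history[-1]:.4f}")
```

The printed risk is non-increasing round over round; under the paper's weak-learnability or attainability conditions the abstract's O(ln(1/ε)) rates apply, and O(1/ε) holds in general.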

