A Convergence Rate Analysis for LogitBoost, MART and Their Variant


Abstract

LogitBoost, MART and their variant can be viewed as additive tree regression using logistic loss and boosting-style optimization. We analyze their convergence rates based on a new weak learnability formulation. We show that it has an O(1/T) rate when using gradient descent only, while a linear rate is achieved when using Newton descent. Moreover, introducing Newton descent when growing the trees, as LogitBoost does, leads to a faster linear rate. Empirical results on UCI datasets support our analysis.
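To make the contrast in the abstract concrete, below is a minimal Python sketch of the two update styles for binary classification with logistic loss: fitting each tree to the negative gradient (gradient descent only) versus fitting it to the per-example Newton step with Hessian weights (Newton descent, as LogitBoost uses when growing trees). The function name boost and all parameter choices are illustrative assumptions, not the authors' implementation; in particular, MART's detail of refitting leaf values with a Newton step after growing a gradient tree is omitted.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boost(X, y, T=100, lr=0.1, mode="newton", max_depth=3):
    """Additive tree model for labels y in {0, 1} under logistic loss.

    mode="gradient": fit each tree to the negative gradient (MART-style).
    mode="newton":   fit each tree to the Newton step -g/h with Hessian
                     weights (LogitBoost-style).
    """
    F = np.zeros(len(y))                                 # additive score, F_0 = 0
    trees = []
    for _ in range(T):
        p = 1.0 / (1.0 + np.exp(-np.clip(F, -30, 30)))   # current probabilities
        g = p - y                                        # gradient of logistic loss w.r.t. F
        h = p * (1.0 - p)                                # Hessian, always positive
        tree = DecisionTreeRegressor(max_depth=max_depth)
        if mode == "gradient":
            tree.fit(X, -g)                              # plain gradient step
        else:
            z = -g / np.maximum(h, 1e-12)                # per-example Newton step
            tree.fit(X, z, sample_weight=h)              # Hessian-weighted regression
        F += lr * tree.predict(X)                        # boosting-style additive update
        trees.append(tree)
    return trees, F

In the paper's terms, the "gradient" branch is the setting analyzed with the O(1/T) rate, while the Hessian-weighted "newton" branch corresponds to the Newton-descent setting that attains a linear rate, with a faster linear rate when the Newton information also drives the tree-growing criterion itself.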
