A Simple Multi-Class Boosting Framework with Theoretical Guarantees and Empirical Proficiency

2020-03-09

Abstract

There is a need for simple yet accurate white-box learning systems that train quickly and with little data. To this end, we showcase REBEL, a multi-class boosting method, and present a novel family of weak learners called localized similarities. Our framework provably minimizes the training error of any dataset at an exponential rate. We carry out experiments on a variety of synthetic and real datasets, demonstrating a consistent tendency to avoid overfitting. We evaluate our method on MNIST and standard UCI datasets against other state-of-the-art methods, showing the empirical proficiency of our method.
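The abstract claims that boosting with weak learners can drive multi-class training error down at an exponential rate. As a rough illustration of that general idea (this is a generic SAMME-style multi-class boosting sketch with threshold stumps, not the paper's REBEL algorithm or its localized-similarity weak learners), a minimal implementation might look like:

```python
import numpy as np

def fit_stump(X, y, w, n_classes):
    """Exhaustively search (feature, threshold, left-class, right-class)
    stumps for the one with minimal weighted training error."""
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            mask = X[:, j] <= t
            for c_left in range(n_classes):
                for c_right in range(n_classes):
                    pred = np.where(mask, c_left, c_right)
                    err = w[pred != y].sum()
                    if err < best_err:
                        best_err, best = err, (j, t, c_left, c_right)
    return best, best_err

def boost(X, y, n_rounds=10):
    """Multi-class boosting (SAMME-style reweighting), returning the
    weighted ensemble of stumps and the number of classes."""
    n, K = len(y), len(np.unique(y))
    w = np.full(n, 1.0 / n)
    learners = []
    for _ in range(n_rounds):
        (j, t, cl, cr), err = fit_stump(X, y, w, K)
        err = max(err, 1e-10)                      # guard against log(0)
        alpha = np.log((1 - err) / err) + np.log(K - 1)
        pred = np.where(X[:, j] <= t, cl, cr)
        w *= np.exp(alpha * (pred != y))           # upweight mistakes
        w /= w.sum()
        learners.append((alpha, j, t, cl, cr))
    return learners, K

def predict(learners, K, X):
    """Weighted vote over all stumps; highest-scoring class wins."""
    votes = np.zeros((len(X), K))
    for alpha, j, t, cl, cr in learners:
        pred = np.where(X[:, j] <= t, cl, cr)
        votes[np.arange(len(X)), pred] += alpha
    return votes.argmax(axis=1)
```

On a toy 3-class problem separable along one feature (e.g. `y = [0,0,1,1,2,2]` over increasing `x`), no single two-sided stump is correct everywhere, but a few boosting rounds combine stumps into a perfect training-set classifier, which is the flavor of the exponential-convergence guarantee the abstract describes.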

