Optimal Binary Classifier Aggregation for General Losses

2020-02-05

Abstract 

We address the problem of aggregating an ensemble of predictors with known loss bounds in a semi-supervised binary classification setting, to minimize prediction loss incurred on the unlabeled data. We find the minimax optimal predictions for a very general class of loss functions including all convex and many non-convex losses, extending a recent analysis of the problem for misclassification error. The result is a family of semi-supervised ensemble aggregation algorithms which are as efficient as linear learning by convex optimization, but are minimax optimal without any relaxations. Their decision rules take a form familiar in decision theory – applying sigmoid functions to a notion of ensemble margin – without the assumptions typically made in margin-based learning.
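The abstract describes decision rules that apply a sigmoid function to an ensemble margin. As a rough illustration only (the paper's actual aggregation weights come from a convex optimization not reproduced here), the following sketch applies a sigmoid-style squashing to a weighted margin over binary ensemble predictions; the uniform weight vector `sigma` and the helper names are hypothetical stand-ins, not the paper's algorithm.

```python
import numpy as np

def ensemble_margin(F, sigma):
    """F: (p, n) matrix of p classifiers' predictions in {-1, +1} on n
    unlabeled points; sigma: (p,) nonnegative weights. Returns (n,) margins."""
    return sigma @ F

def aggregate(F, sigma):
    """Sigmoid-style aggregation: squash each weighted margin into [-1, 1].
    tanh is used here as one concrete sigmoid; the paper derives the exact
    form from the loss function, which this sketch does not attempt."""
    return np.tanh(ensemble_margin(F, sigma))

# Toy usage: three classifiers, four unlabeled points, uniform weights.
F = np.array([[1,  1, -1, -1],
              [1, -1, -1,  1],
              [1,  1,  1, -1]])
sigma = np.full(3, 1.0 / 3.0)
g = aggregate(F, sigma)
# With uniform weights, the sign of g matches the majority vote per point.
labels = np.sign(g)
```

With uniform weights this reduces to a soft majority vote; the paper's contribution is choosing the weights and the sigmoid shape so the aggregate is minimax optimal for the given loss, which this toy example does not capture.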
