
The Coherent Loss Function for Classification

2020-03-03

Abstract

A prediction rule in binary classification that aims to achieve the lowest probability of misclassification involves minimizing over a nonconvex 0-1 loss function, which is typically a computationally intractable optimization problem. To address this intractability, previous methods minimize the cumulative loss: the sum of convex surrogates of the 0-1 loss over the individual samples. We revisit this paradigm and instead develop an axiomatic framework, proposing a set of salient properties that loss functions for binary classification should satisfy, and then propose the coherent loss approach, a tractable upper bound on the empirical classification error over the entire sample set. We show that the proposed approach yields a strictly tighter approximation to the empirical classification error than any convex cumulative loss approach while preserving the convexity of the underlying optimization problem. The approach also admits a robustness interpretation that connects it to robust SVMs.
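The cumulative-surrogate paradigm the abstract refers to can be illustrated with a small sketch (not the paper's coherent loss itself): for any fixed linear classifier, the hinge loss upper-bounds the 0-1 loss on each sample, so the average hinge loss upper-bounds the empirical classification error. The data, classifier `w`, and label noise below are hypothetical choices for illustration only.

```python
import numpy as np

# Toy data: linearly separable labels with a few flipped as noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
y[:5] *= -1  # flip 5 labels

# A candidate linear classifier (hypothetical, fixed for illustration).
w = np.array([1.0, 1.0])
margins = y * (X @ w)

# Empirical 0-1 classification error (nonconvex in w).
zero_one = np.mean(margins <= 0)

# Convex cumulative surrogate: average hinge loss max(0, 1 - margin).
hinge = np.mean(np.maximum(0.0, 1.0 - margins))

# Per-sample, max(0, 1 - m) >= 1[m <= 0], so the surrogate upper-bounds
# the empirical error; the paper's coherent loss tightens such bounds
# over the entire sample set rather than sample by sample.
assert hinge >= zero_one
print(zero_one, hinge)
```

Minimizing the convex surrogate in `w` is tractable, but the gap between the two quantities is exactly what the coherent loss approach is designed to shrink.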

