
Learning with Average Top-k Loss

2020-02-10

Abstract

 In this work, we introduce the average top-k (ATk) loss as a new aggregate loss for supervised learning, defined as the average over the k largest individual losses on a training dataset. We show that the ATk loss is a natural generalization of the two widely used aggregate losses, namely the average loss and the maximum loss, and that it can combine their advantages and mitigate their drawbacks to better adapt to different data distributions. Furthermore, it remains a convex function of all individual losses, which leads to convex optimization problems that can be solved effectively with conventional gradient-based methods. We provide an intuitive interpretation of the ATk loss based on its equivalent effect on the continuous individual loss functions, suggesting that it can reduce the penalty on correctly classified data. We further give a learning theory analysis of MATk learning, covering the classification calibration of the ATk loss and the error bounds of ATk-SVM. We demonstrate the applicability of minimum average top-k (MATk) learning for binary classification and regression using synthetic and real datasets.
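The aggregate loss described above is simple to compute: sort the individual losses and average the k largest. A minimal NumPy sketch (the function name and inputs are illustrative, not from the paper):

```python
import numpy as np

def average_top_k_loss(individual_losses, k):
    """Average of the k largest individual losses (the ATk aggregate loss).

    As the abstract notes, k = 1 recovers the maximum loss and
    k = n (the dataset size) recovers the average loss.
    """
    losses = np.asarray(individual_losses, dtype=float)
    top_k = np.sort(losses)[-k:]  # the k largest individual losses
    return top_k.mean()

losses = [0.1, 2.0, 0.5, 1.5]
print(average_top_k_loss(losses, 1))  # k=1: maximum loss, 2.0
print(average_top_k_loss(losses, 2))  # mean of the two largest, 1.75
print(average_top_k_loss(losses, 4))  # k=n: average loss, 1.025
```

Note that the ATk loss is a convex function of the individual losses (it is a maximum of averages over all size-k subsets), which is what makes the resulting learning problems amenable to standard gradient-based solvers.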

