CURRICULUM LOSS: ROBUST LEARNING AND GENERALIZATION AGAINST LABEL CORRUPTION

2020-01-02

Abstract

Deep neural networks (DNNs) have great expressive power, and can even memorize samples with wrong labels. It is therefore vitally important to ensure robustness and generalization in DNNs against label corruption. To this end, this paper studies the 0-1 loss, which has a monotonic relationship with the empirical adversarial (reweighted) risk (Hu et al., 2018). Although the 0-1 loss is robust to outliers, it is also difficult to optimize. To efficiently optimize the 0-1 loss while keeping its robustness properties, we propose a very simple and efficient loss, the curriculum loss (CL). Our CL is a tighter upper bound on the 0-1 loss than conventional summation-based surrogate losses. Moreover, CL can adaptively select samples for stagewise training. As a result, our loss can be viewed as a curriculum sample selection strategy, which builds a connection between curriculum learning and robust learning. Experimental results on noisy MNIST, CIFAR10 and CIFAR100 datasets validate the robustness of the proposed loss.
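To make the sample-selection idea concrete, the following is a minimal sketch, not the paper's exact formulation. It assumes a simplified curriculum loss of the form CL(l) = min over binary selections v of max(Σᵢ vᵢ·lᵢ, n − Σᵢ vᵢ): each selected sample contributes its surrogate loss, while each dropped sample contributes 1 (the trivial 0-1 loss bound). Under this form, the optimum always selects a prefix of the losses sorted in ascending order, so it can be found by scanning over the selection size k:

```python
import numpy as np

def curriculum_loss(losses):
    """Sketch of a curriculum-style selection loss (simplified, illustrative form).

    Assumes CL(l) = min_{v in {0,1}^n} max(sum_i v_i * l_i, n - sum_i v_i):
    either pay the surrogate loss of a selected sample, or pay 1 (the 0-1
    loss bound) for dropping it. The optimum selects the k smallest losses
    for some k, so we scan all prefix sizes of the sorted losses.
    """
    losses = np.asarray(losses, dtype=float)
    n = len(losses)
    order = np.argsort(losses)
    sorted_losses = losses[order]
    # cumsum[k] = total surrogate loss of the k smallest-loss samples
    cumsum = np.concatenate([[0.0], np.cumsum(sorted_losses)])
    # objective for selecting the k smallest-loss samples, k = 0..n
    objective = np.maximum(cumsum, n - np.arange(n + 1))
    k = int(np.argmin(objective))
    selected = np.sort(order[:k])
    return float(objective[k]), selected

# Samples with small surrogate losses are kept; the high-loss
# (likely mislabeled) sample is dropped from the current stage.
value, selected = curriculum_loss([0.1, 0.2, 0.3, 5.0])
```

In a stagewise training loop, only the samples in `selected` would contribute gradients for the current stage, which is how the selection behaves like a curriculum: as the model improves, more samples attain small losses and are admitted.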

