
RETHINKING SOFTMAX CROSS-ENTROPY LOSS FOR ADVERSARIAL ROBUSTNESS

2020-01-02

Abstract

Previous work shows that adversarially robust generalization requires larger sample complexity, and the same dataset, e.g., CIFAR-10, which enables good standard accuracy may not suffice to train robust models. Since collecting new training data could be costly, we focus on better utilizing the given data by inducing the regions with high sample density in the feature space, which could lead to locally sufficient samples for robust learning. We first formally show that the softmax cross-entropy (SCE) loss and its variants convey inappropriate supervisory signals, which encourage the learned feature points to spread over the space sparsely in training. This inspires us to propose the Max-Mahalanobis center (MMC) loss to explicitly induce dense feature regions in order to benefit robustness. Namely, the MMC loss encourages the model to concentrate on learning ordered and compact representations, which gather around the preset optimal centers for different classes. We empirically demonstrate that applying the MMC loss can significantly improve robustness even under strong adaptive attacks, while keeping high accuracy on clean inputs comparable to the SCE loss with little extra computation.
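The sketch below illustrates the core idea described above: class centers are preset (not learned) and maximally separated, and the loss pulls each feature vector toward its class center. This is a minimal NumPy sketch under assumptions, not the authors' implementation: the centers are built with a simple simplex embedding standing in for the Max-Mahalanobis construction, and the names `mm_centers`, `mmc_loss`, the radius value, and the random batch are all illustrative.

```python
import numpy as np

def mm_centers(num_classes: int, feat_dim: int, radius: float = 10.0) -> np.ndarray:
    """Preset, untrainable class centers forming a regular simplex.

    Each center has norm `radius` and pairwise inner product
    -radius**2 / (num_classes - 1), i.e., the centers are maximally
    and equally separated. The simplex-embedding construction here is
    an assumption for illustration; the paper derives the centers as
    Max-Mahalanobis centers.
    """
    assert feat_dim >= num_classes, "this simple construction needs feat_dim >= num_classes"
    eye = np.eye(num_classes)
    centered = eye - eye.mean(axis=0, keepdims=True)              # subtract the centroid
    centered /= np.linalg.norm(centered, axis=1, keepdims=True)   # unit-norm directions
    centers = np.zeros((num_classes, feat_dim))
    centers[:, :num_classes] = radius * centered                  # embed into feature space
    return centers

def mmc_loss(features: np.ndarray, labels: np.ndarray, centers: np.ndarray) -> float:
    """Mean squared distance between each feature and its preset class center."""
    diffs = features - centers[labels]
    return 0.5 * float(np.mean(np.sum(diffs ** 2, axis=1)))

# Toy usage with random "features" standing in for penultimate-layer outputs.
rng = np.random.default_rng(0)
centers = mm_centers(num_classes=10, feat_dim=64)
feats = rng.normal(size=(32, 64))        # hypothetical feature batch
labels = rng.integers(0, 10, size=32)    # hypothetical labels
print(mmc_loss(feats, labels, centers))
```

The design point relative to the SCE loss is that the centers are fixed in advance rather than induced implicitly by the softmax classifier, so the supervisory signal concentrates the features of each class around a predetermined, well-separated location, yielding the dense feature regions the abstract argues are needed for robust learning.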
