REGULARIZING ACTIVATIONS IN NEURAL NETWORKS VIA DISTRIBUTION MATCHING WITH THE WASSERSTEIN METRIC

2019-12-30

Abstract

Regularization and normalization have become indispensable components in training deep neural networks, resulting in faster training and improved generalization performance. We propose the projected error function regularization loss (PER), which encourages activations to follow the standard normal distribution. PER randomly projects activations onto a one-dimensional space and computes the regularization loss in the projected space. PER is similar to the Pseudo-Huber loss in the projected space, thus taking advantage of both L1 and L2 regularization losses. In addition, PER can capture interactions between hidden units through projection vectors drawn from the unit sphere. By doing so, PER minimizes an upper bound on the Wasserstein distance of order one between the empirical distribution of activations and the standard normal distribution. To the best of the authors' knowledge, this is the first work to regularize activations via distribution matching in the space of probability distributions. We evaluate the proposed method on the image classification task and the word-level language modeling task.
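The abstract's recipe — project activations onto random unit vectors, then apply a Pseudo-Huber-like penalty that pulls the projected distribution toward N(0, 1) — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the per-projection penalty is E_{X~N(0,1)}|z − X| = z·erf(z/√2) + √(2/π)·exp(−z²/2), which is quadratic near zero and linear for large |z| (matching the Pseudo-Huber behavior described above), and it averages over a few random directions as a Monte Carlo estimate. The function names `per_penalty` and `per_loss` are illustrative.

```python
import math
import random

def per_penalty(z):
    """Penalty on one projected activation z.

    Assumed form: E_{X ~ N(0,1)} |z - X|
                = z * erf(z / sqrt(2)) + sqrt(2/pi) * exp(-z^2 / 2).
    Behaves like z^2 (plus a constant) near 0 and like |z| for large |z|,
    i.e. Pseudo-Huber-like, as the abstract describes.
    """
    return (z * math.erf(z / math.sqrt(2.0))
            + math.sqrt(2.0 / math.pi) * math.exp(-z * z / 2.0))

def per_loss(activations, n_projections=8, seed=0):
    """Monte Carlo sketch of the PER-style regularizer.

    activations: list of hidden-unit vectors (one per example).
    Each round draws a direction uniformly from the unit sphere
    (normalized Gaussian), projects every activation onto it,
    and averages the penalty over examples and directions.
    """
    rng = random.Random(seed)
    d = len(activations[0])
    total = 0.0
    for _ in range(n_projections):
        # Direction drawn from the unit sphere, capturing interactions
        # between hidden units through the random mixing of coordinates.
        v = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
        # Project each activation vector onto the one-dimensional space.
        zs = [sum(a * b for a, b in zip(h, v)) for h in activations]
        total += sum(per_penalty(z) for z in zs) / len(zs)
    return total / n_projections
```

In training, this scalar would be added to the task loss with a small weight; activations already distributed as N(0, 1) in every direction attain the penalty's minimum, which is how the loss nudges the empirical distribution toward the standard normal.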
