UNDERSTANDING WHY NEURAL NETWORKS GENERALIZE WELL THROUGH GSNR OF PARAMETERS

2019-12-30

Abstract

As deep neural networks (DNNs) achieve tremendous success across many application domains, researchers have tried to explain from many perspectives why they generalize so well. In this paper, we provide a novel perspective on these issues using the gradient signal to noise ratio (GSNR) of parameters during the training process of DNNs. The GSNR of a parameter is defined as the ratio between its gradient's squared mean and variance, over the data distribution. Based on several approximations, we establish a quantitative relationship between model parameters' GSNR and the generalization gap. This relationship indicates that larger GSNR during the training process leads to better generalization performance. Moreover, we show that, different from that of shallow models (e.g. logistic regression, support vector machines), the gradient descent optimization dynamics of DNNs naturally produces large GSNR during training, which is probably the key to DNNs' remarkable generalization ability.
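To make the definition above concrete, the following NumPy sketch (a hypothetical illustration, not the paper's code) estimates per-parameter GSNR from per-sample gradients: the squared mean of each parameter's gradient across samples divided by its variance. The array name per_sample_grads and the eps smoothing term are assumptions introduced only for this example.

    import numpy as np

    def gsnr(per_sample_grads, eps=1e-12):
        # per_sample_grads: array of shape (n_samples, n_params); row i holds the
        # gradient of the loss on sample i with respect to every model parameter.
        # Returns an array of shape (n_params,): squared mean gradient ("signal")
        # divided by gradient variance ("noise"), estimated over the samples.
        mean_g = per_sample_grads.mean(axis=0)
        var_g = per_sample_grads.var(axis=0)
        return mean_g ** 2 / (var_g + eps)   # eps only guards against division by zero

    # Toy usage: 1000 samples, 50 parameters, synthetic gradients with nonzero mean.
    grads = 0.1 + 0.5 * np.random.randn(1000, 50)
    print(gsnr(grads).shape)   # (50,)

In this form, a parameter whose per-sample gradients agree in direction (large mean relative to spread) gets a high GSNR, which is the quantity the paper links to a smaller generalization gap.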
