COMPRESSION BASED BOUND FOR NON-COMPRESSED NETWORK: UNIFIED GENERALIZATION ERROR ANALYSIS OF LARGE COMPRESSIBLE DEEP NEURAL NETWORK

2019-12-30

Abstract

One of the biggest issues in deep learning theory is the generalization ability of networks with huge model size. Classical learning theory suggests that overparameterized models cause overfitting. However, large deep models used in practice avoid overfitting, which the classical approaches do not explain well. Several attempts have been made to resolve this issue; among them, the compression based bound is one of the most promising. However, a compression based bound applies only to the compressed network and is not applicable to the non-compressed original network. In this paper, we give a unified framework that converts compression based bounds into bounds for the non-compressed original network. The resulting bound achieves an even better rate than the one for the compressed network by improving the bias term. By establishing the unified framework, we obtain a data dependent generalization error bound that gives a tighter evaluation than data independent ones.
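
To make the contrast in the abstract concrete, the following is a schematic sketch of the two kinds of bounds, not the paper's exact statements. The notation is assumed for illustration only: \hat{f} is the trained (non-compressed) network, \check{f} its compressed version, R and \hat{R} the population and empirical risks over n samples, and comp(\mathcal{F}_c) a complexity measure of the small class containing the compressed networks.

% Schematic only; illustrative notation, not the paper's.
% (1) A classical compression based bound controls only the compressed network:
\[
  R(\check{f}) \;\le\; \hat{R}(\check{f})
    \;+\; O\!\left(\sqrt{\frac{\mathrm{comp}(\mathcal{F}_c)}{n}}\right).
\]
% (2) A converted bound of the kind described above controls the original
%     network directly, paying a bias term for the distance to its compression:
\[
  R(\hat{f}) \;\le\; \hat{R}(\hat{f})
    \;+\; \underbrace{\mathrm{dist}\bigl(\hat{f},\check{f}\bigr)}_{\text{bias term}}
    \;+\; O\!\left(\sqrt{\frac{\mathrm{comp}(\mathcal{F}_c)}{n}}\right).
\]

Under this reading, the improvement claimed in the abstract lives in the bias term: if the trained network is highly compressible, dist(\hat{f}, \check{f}) is small, so the bound for the large original network is nearly as tight as the one for its compression, and a data dependent choice of the compression can tighten it further.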
