SIZE-FREE GENERALIZATION BOUNDS FOR CONVOLUTIONAL NEURAL NETWORKS

2019-12-30

Abstract

We prove bounds on the generalization error of convolutional networks. The bounds are in terms of the training loss, the number of parameters, the Lipschitz constant of the loss, and the distance from the weights to the initial weights. They are independent of the number of pixels in the input and of the height and width of hidden feature maps. We present experiments on CIFAR-10 in which we vary the hyperparameters of a deep convolutional network and compare our bounds with the generalization gaps observed in practice.
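The key data-dependent quantity in the bound is the Euclidean distance from the trained weights to their initialization. A minimal pure-Python sketch of how one might measure that quantity (layer weights flattened to plain lists; all names here are hypothetical, not from the paper):

```python
import math

def distance_to_init(weights, init_weights):
    """Euclidean (L2) distance between current and initial parameters.

    `weights` / `init_weights`: lists of flattened per-layer weight lists.
    This is the 'distance from the weights to the initial weights' term
    the generalization bound depends on.
    """
    squared = sum(
        (w - w0) ** 2
        for layer, layer0 in zip(weights, init_weights)
        for w, w0 in zip(layer, layer0)
    )
    return math.sqrt(squared)

# Hypothetical two-layer example: initialization vs. slightly trained weights.
init = [[0.5, -0.3, 0.1], [1.0, 0.0]]
trained = [[0.6, -0.25, 0.1], [0.9, 0.05]]

print(distance_to_init(init, init))     # 0.0 -- no movement from init
print(distance_to_init(trained, init))  # small positive distance
```

In practice one would track this norm over training; bounds of this kind suggest that networks whose weights stay close to initialization generalize better, independently of input resolution.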
