
Depth-Width Tradeoffs in Approximating Natural Functions with Neural Networks

2020-03-09

Abstract

We provide several new depth-based separation results for feed-forward neural networks, proving that various types of simple and natural functions can be better approximated using deeper networks than shallower ones, even if the shallower networks are much larger. This includes indicators of balls and ellipses; non-linear functions which are radial with respect to the L1 norm; and smooth non-linear functions. We also show that these gaps can be observed experimentally: Increasing the depth indeed allows better learning than increasing width, when training neural networks to learn an indicator of a unit ball.
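The depth-versus-width experiment mentioned at the end of the abstract can be sketched as follows. This is a minimal, hypothetical PyTorch setup (not the authors' code): it trains a deeper-but-narrower ReLU network and a shallower-but-wider one to fit the indicator of the unit ball, then compares test error. The dimension, widths, depths, batch size, learning rate, and step count are all illustrative assumptions.

```python
# Minimal sketch (illustrative hyperparameters) of the depth-vs-width
# experiment: learn the indicator 1{||x||_2 <= 1} with ReLU networks.
import torch
import torch.nn as nn

d = 4  # input dimension (illustrative choice)

def make_mlp(depth, width):
    """Fully connected ReLU network with `depth` hidden layers."""
    layers, in_dim = [], d
    for _ in range(depth):
        layers += [nn.Linear(in_dim, width), nn.ReLU()]
        in_dim = width
    layers.append(nn.Linear(in_dim, 1))  # scalar logit output
    return nn.Sequential(*layers)

def sample_batch(n):
    # Sample points at radii around 1 so both classes are well represented.
    x = torch.randn(n, d)
    x = x / x.norm(dim=1, keepdim=True) * (0.5 + torch.rand(n, 1))
    y = (x.norm(dim=1, keepdim=True) <= 1.0).float()  # indicator of unit ball
    return x, y

def train_and_eval(model, steps=2000):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        x, y = sample_batch(256)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    # Squared error of the predicted probability on a fresh test batch.
    x, y = sample_batch(10000)
    with torch.no_grad():
        return ((torch.sigmoid(model(x)) - y) ** 2).mean().item()

deep = make_mlp(depth=3, width=64)    # deeper, narrower
wide = make_mlp(depth=1, width=2048)  # shallower, much wider
print("deep 3x64   test MSE:", train_and_eval(deep))
print("wide 1x2048 test MSE:", train_and_eval(wide))
```

Under the paper's separation results, one would expect the deeper network to reach lower approximation error than the wide one-hidden-layer network on this radial target, though the exact gap depends on the assumed hyperparameters above.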
