
Sharp Minima Can Generalize For Deep Nets


Abstract

Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); Keskar et al. (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.
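The symmetry the abstract alludes to is the non-negative homogeneity of rectifier units: for a two-layer ReLU network f(x) = W2 · relu(W1 · x), rescaling the layers as (alpha·W1, W2/alpha) with alpha > 0 leaves the function, and hence its generalization behavior, unchanged, while the curvature of the loss surface around the minimum is rescaled. The sketch below (plain NumPy, not the authors' code; all names, sizes, and the synthetic data are illustrative assumptions) checks both facts numerically.

```python
import numpy as np

# Illustrative sketch of the ReLU rescaling symmetry discussed in the abstract:
# (alpha*W1, W2/alpha) computes the same function as (W1, W2), yet the local
# curvature of the loss measured in parameter space changes with alpha.

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

W1 = rng.normal(size=(16, 8))   # first-layer weights (hypothetical sizes)
W2 = rng.normal(size=(1, 16))   # second-layer weights
x = rng.normal(size=(8, 32))    # a batch of synthetic inputs
y = rng.normal(size=(1, 32))    # dummy regression targets

def forward(W1, W2, x):
    return W2 @ relu(W1 @ x)

def loss(W1, W2):
    return 0.5 * np.mean((forward(W1, W2, x) - y) ** 2)

alpha = 1000.0                  # any alpha > 0 works
W1s, W2s = alpha * W1, W2 / alpha

# 1) The function (and therefore the loss) is unchanged by the rescaling.
assert np.allclose(forward(W1, W2, x), forward(W1s, W2s, x))
assert np.isclose(loss(W1, W2), loss(W1s, W2s))

# 2) The parameter-space geometry is not: a finite-difference estimate of the
#    curvature along a fixed direction in W2 grows roughly like alpha**2.
eps = 1e-4
d = np.zeros_like(W2); d[0, 0] = 1.0
curv = (loss(W1, W2 + eps * d) - 2 * loss(W1, W2)
        + loss(W1, W2 - eps * d)) / eps**2
curv_sharp = (loss(W1s, W2s + eps * d) - 2 * loss(W1s, W2s)
              + loss(W1s, W2s - eps * d)) / eps**2
print(curv, curv_sharp)  # second value is ~alpha**2 times the first
```

Under this reading, any sharpness measure built from Hessian eigenvalues or local loss increases can be made arbitrarily large for a model that computes exactly the same function, which is the paper's central objection to flatness-based explanations of generalization.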


