
Geometry of Neural Network Loss Surfaces via Random Matrix Theory


Abstract

Understanding the geometry of neural network loss surfaces is important for the development of improved optimization algorithms and for building a theoretical understanding of why deep learning works. In this paper, we study the geometry in terms of the distribution of eigenvalues of the Hessian matrix at critical points of varying energy. We introduce an analytical framework and a set of tools from random matrix theory that allow us to compute an approximation of this distribution under a set of simplifying assumptions. The shape of the spectrum depends strongly on the energy and another key parameter, Φ, which measures the ratio of parameters to data points. Our analysis predicts, and numerical simulations support, that for critical points of small index, the number of negative eigenvalues scales like the 3/2 power of the energy. We leave as an open problem an explanation for our observation that, in the context of a certain memorization task, the energy of minimizers is well-approximated by the function $\frac{1}{2}(1 - 1/\Phi)^2$.
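To make the abstract's random-matrix picture concrete, below is a minimal numerical sketch (not the authors' code). It assumes the paper's simplified model in which the Hessian at a critical point of energy ε is approximated by a Wishart matrix (the positive semi-definite Gauss-Newton part) plus a Wigner matrix whose variance grows with ε, and it estimates the index as the fraction of negative eigenvalues. The function name, dimensions, and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_hessian(n, phi, eps, rng):
    """Wishart-plus-Wigner surrogate for the Hessian (illustrative model).

    n    -- number of parameters
    phi  -- ratio of parameters to data points
    eps  -- energy (loss value) at the critical point
    """
    m = int(n / phi)                       # number of data points
    J = rng.normal(size=(n, m)) / np.sqrt(m)
    wishart = J @ J.T                      # positive semi-definite part
    W = rng.normal(size=(n, n))
    wigner = np.sqrt(eps) * (W + W.T) / np.sqrt(2 * n)  # variance grows with eps
    return wishart + wigner

n, phi = 400, 2.0
for eps in [0.01, 0.05, 0.1, 0.2]:
    eigs = np.linalg.eigvalsh(model_hessian(n, phi, eps, rng))
    index = np.mean(eigs < 0)              # fraction of negative eigenvalues
    print(f"eps={eps:.2f}  index≈{index:.4f}")
```

In this toy model the index grows with the energy ε, which is the qualitative behavior the abstract describes; the paper's prediction is that near small index the growth scales like ε^{3/2}.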


