Gradient Descent Learns One-hidden-layer CNN: Don’t be Afraid of Spurious Local Minima

2020-03-16

Abstract

We consider the problem of learning a one-hidden-layer neural network with a non-overlapping convolutional layer and ReLU activation, i.e., $f(Z, \mathbf{w}, \mathbf{a}) = \sum_{j} a_j \sigma(\mathbf{w}^\top Z_j)$, in which both the convolutional weights $\mathbf{w}$ and the output weights $\mathbf{a}$ are parameters to be learned. When the labels are the outputs from a teacher network of the same architecture with fixed weights $(\mathbf{w}^*, \mathbf{a}^*)$, we prove that with Gaussian input $Z$, there is a spurious local minimizer. Surprisingly, in the presence of the spurious local minimizer, gradient descent with weight normalization from randomly initialized weights can still be proven to recover the true parameters with constant probability, which can be boosted to probability 1 with multiple restarts. We also show that with constant probability, the same procedure could also converge to the spurious local minimum, showing that the local minimum plays a non-trivial role in the dynamics of gradient descent. Furthermore, a quantitative analysis shows that the gradient descent dynamics has two phases: it starts off slow, but converges much faster after several iterations.
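
To make the setup concrete, here is a minimal NumPy sketch of the teacher-student setting described in the abstract: a student network $f(Z, \mathbf{w}, \mathbf{a}) = \sum_j a_j \sigma(\mathbf{w}^\top Z_j)$ is fit by gradient descent to labels produced by a fixed teacher $(\mathbf{w}^*, \mathbf{a}^*)$ on fresh Gaussian patches. The patch count, patch dimension, step size, and the per-step projection of $\mathbf{w}$ onto the unit sphere (used here as a simple stand-in for the paper's weight-normalization parameterization) are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

# Sketch of the teacher-student setup from the abstract:
#   f(Z, w, a) = sum_j a_j * relu(w^T Z_j),
# where each row of Z is one non-overlapping Gaussian patch.
# Sizes, step size, and the unit-norm projection are illustrative only.

rng = np.random.default_rng(0)
k, p = 8, 16  # number of non-overlapping patches, patch dimension

def relu(x):
    return np.maximum(x, 0.0)

def forward(Z, w, a):
    # Z has shape (k, p): one row per patch.
    return a @ relu(Z @ w)

# Fixed teacher parameters (unit-norm filter for convenience).
w_star = rng.normal(size=p)
w_star /= np.linalg.norm(w_star)
a_star = rng.normal(size=k)

# Randomly initialized student parameters.
w = rng.normal(size=p)
w /= np.linalg.norm(w)
a = rng.normal(size=k)

lr = 0.02
for t in range(5000):
    Z = rng.normal(size=(k, p))     # fresh Gaussian input each step
    y = forward(Z, w_star, a_star)  # teacher label
    pre = Z @ w                     # patch pre-activations, shape (k,)
    err = forward(Z, w, a) - y      # scalar residual
    grad_a = err * relu(pre)
    grad_w = err * (Z.T @ (a * (pre > 0)))
    a -= lr * grad_a
    w -= lr * grad_w
    w /= np.linalg.norm(w)          # keep ||w|| = 1 (weight-normalization stand-in)

cos = np.clip(w @ w_star, -1.0, 1.0)
print("angle(w, w*) in degrees:", np.degrees(np.arccos(cos)))
print("||a - a*||:", np.linalg.norm(a - a_star))
```

Re-running the sketch from several random initializations and keeping the best run mirrors the multiple-restart scheme that, per the abstract, boosts the constant success probability to probability 1.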

