
Adding One Neuron Can Eliminate All Bad Local Minima

2020-02-17

Abstract 

One of the main difficulties in analyzing neural networks is the non-convexity of the loss function, which may have many bad local minima. In this paper, we study the loss landscape of neural networks for binary classification tasks. Under mild assumptions, we prove that after adding one special neuron with a skip connection to the output, or one special neuron per layer, every local minimum is a global minimum.
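To make the construction concrete, the sketch below augments a toy network's output with a single extra neuron connected directly from the input to the output via a skip connection. The exponential activation for the extra neuron and the squared regularizer on its output weight `a` are assumptions about the paper's construction, not quoted from this abstract; at `a = 0` the augmented network reduces exactly to the base network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer ReLU network with a scalar output (base model).
def base_net(X, W1, w2):
    return np.maximum(X @ W1, 0.0) @ w2

# Augmented output: the base network plus one extra neuron whose input
# is connected directly to the output (a skip connection). The
# exponential activation here is an assumed form of the "special neuron".
def augmented_net(X, W1, w2, a, w, b):
    return base_net(X, W1, w2) + a * np.exp(X @ w + b)

# Logistic loss for binary labels y in {-1, +1}, plus a regularizer
# lam * a**2 on the extra neuron's output weight (assumed form).
def loss(y, scores, a, lam=1e-2):
    return np.mean(np.log1p(np.exp(-y * scores))) + lam * a**2

# Small random problem instance.
X = rng.normal(size=(8, 3))
y = np.sign(rng.normal(size=8))
W1 = rng.normal(size=(3, 4))
w2 = rng.normal(size=4)
w = rng.normal(size=3)
b = 0.0

# With a = 0 the extra neuron contributes nothing, so the augmented
# loss (minus the regularizer, which is also zero) equals the base loss.
base_loss = loss(y, base_net(X, W1, w2), 0.0)
aug_loss = loss(y, augmented_net(X, W1, w2, 0.0, w, b), 0.0)
```

The point of the landscape result is that, for the augmented loss, every local minimum is global; the extra neuron acts as an "exit direction" out of bad local minima of the original loss, and at any global minimum it can be switched off (`a = 0`) without changing the base network's predictions.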
