
Learning ReLUs via Gradient Descent

Mahdi Soltanolkotabi


Abstract 

In this paper we study the problem of learning Rectified Linear Units (ReLUs), which are functions of the form $\mathbf{x} \mapsto \max(0, \langle \mathbf{w}, \mathbf{x} \rangle)$ with $\mathbf{w} \in \mathbb{R}^d$ denoting the weight vector. We study this problem in the high-dimensional regime where the number of observations is smaller than the dimension of the weight vector. We assume that the weight vector belongs to some closed set (convex or nonconvex) which captures known side-information about its structure. We focus on the realizable model where the inputs are chosen i.i.d. from a Gaussian distribution and the labels are generated according to a planted weight vector. We show that projected gradient descent, when initialized at 0, converges at a linear rate to the planted model with a number of samples that is optimal up to numerical constants. Our results on the dynamics of convergence of these very shallow neural nets may provide some insights towards understanding the dynamics of deeper architectures.
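The setting described in the abstract is easy to sketch in code. Below is a minimal NumPy illustration of projected gradient descent on the empirical squared loss for a planted ReLU model, using sparsity as the side-information and hard thresholding as the projection. The problem sizes, sparsity level, step size, and iteration count are illustrative assumptions, not the paper's exact algorithm or constants.

```python
import numpy as np

# Sketch of the abstract's setting: i.i.d. Gaussian inputs, labels generated
# by a planted sparse weight vector, projected gradient descent started at 0.
rng = np.random.default_rng(0)
d, n, k = 500, 200, 5                  # n < d: the high-dimensional regime

w_star = np.zeros(d)                   # planted k-sparse weight vector
support = rng.choice(d, size=k, replace=False)
w_star[support] = rng.standard_normal(k)
w_star /= np.linalg.norm(w_star)

X = rng.standard_normal((n, d))        # i.i.d. Gaussian inputs
y = np.maximum(X @ w_star, 0.0)        # realizable ReLU labels

def project(w, k):
    """Keep the k largest-magnitude entries: projection onto the (closed,
    nonconvex) set of k-sparse vectors, one example of side-information."""
    out = np.zeros_like(w)
    idx = np.argpartition(np.abs(w), -k)[-k:]
    out[idx] = w[idx]
    return out

w = np.zeros(d)                        # initialize at 0, as in the abstract
eta = 0.5                              # step size (assumed)
for _ in range(300):
    pred = np.maximum(X @ w, 0.0)
    mask = (X @ w >= 0).astype(float)  # >= keeps the first step from 0 nonzero
    # (sub)gradient of (1/2n) * sum_i (max(0, <w, x_i>) - y_i)^2
    grad = X.T @ ((pred - y) * mask) / n
    w = project(w - eta * grad, k)

print("relative error:", np.linalg.norm(w - w_star) / np.linalg.norm(w_star))
```

The projection is what makes recovery possible with $n < d$: for $k$-sparse vectors the required sample size scales roughly like $k \log(d/k)$ rather than $d$, in line with the abstract's claim of sample complexity that is optimal up to numerical constants.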
