Abstract
We consider training over-parameterized two-layer neural networks with Rectified Linear Unit (ReLU) activation using the gradient descent (GD) method. Inspired by a recent line of work, we study the evolution of the network prediction errors across GD iterations, which can be neatly described in a matrix form. When the network is sufficiently over-parameterized, these matrices individually approximate an integral operator that is determined by the feature vector distribution μ only. Consequently, the GD method can be viewed as approximately applying the powers of this integral operator to the underlying function f* that generates the responses. We show that if f* admits a low-rank approximation with respect to the eigenspaces of this integral operator, then the empirical risk decreases to this low-rank approximation error at a linear rate that is determined by f* and μ only, i.e., the rate is independent of the sample size n. Furthermore, if f*
has zero low-rank approximation error, then, as long as the width of the neural network is nearly linear in the sample size n, the empirical risk decreases to Θ(1/√n).
To the best of our knowledge, this is the first result showing the sufficiency of nearly-linear network over-parameterization. We provide an application of our general results to the setting where μ is the uniform distribution on the sphere and f* is a polynomial. Throughout this paper, we consider the scenario where the input dimension d is fixed.
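As a schematic illustration of the dynamics summarized above (the symbols H, K, L_K, and η below are our labels and the normalizations are illustrative, not necessarily those of the formal statements), one GD step acts on the prediction-error vector e^{(k)} = (f(x_i; W^{(k)}) - y_i)_{i=1}^n roughly as
\[
e^{(k+1)} \approx \Big(I - \tfrac{\eta}{n} H\Big) e^{(k)}, \qquad H_{ij} = K(x_i, x_j),
\]
where K is the kernel induced by the random ReLU features. For large n, the matrix (1/n)H acts on functions like the integral operator
\[
(L_K g)(x) = \int K(x, s)\, g(s)\, d\mu(s),
\]
so e^{(k)} behaves like the evaluations of (I - ηL_K)^k f* at the samples. If f* is (approximately) spanned by eigenfunctions of L_K with eigenvalues λ_1 ≥ ... ≥ λ_r > 0, the corresponding error component contracts by a factor (1 - ηλ_r) per iteration, which is the sample-size-independent linear rate referred to above.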
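A minimal, self-contained numerical sketch of the training setup described above is given below. It is an illustration under assumed choices (the quadratic target f_star, the step size eta, fixing the second-layer signs, and all constants are ours), not the construction analyzed in the paper.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: n feature vectors drawn uniformly from the unit sphere in R^d,
# with responses generated by an assumed polynomial target f_star.
n, d, m = 200, 5, 2000                         # sample size, input dimension, network width
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
f_star = lambda x: (x @ np.ones(d)) ** 2       # hypothetical quadratic target
y = f_star(X)

# Two-layer ReLU network f(x) = (1/sqrt(m)) * sum_r a_r * relu(w_r . x);
# GD trains the first-layer weights W, the signs a_r stay fixed (a common simplification).
W = rng.normal(size=(m, d))
a = rng.choice([-1.0, 1.0], size=m)

def predict(X, W):
    return (np.maximum(X @ W.T, 0.0) @ a) / np.sqrt(m)

eta = 1.0
for k in range(201):
    err = predict(X, W) - y                    # prediction-error vector e^(k)
    # Gradient of the empirical risk (1/(2n)) * ||e^(k)||^2 with respect to W.
    act = (X @ W.T > 0).astype(float)          # ReLU activation pattern, shape (n, m)
    grad_W = ((act * err[:, None]).T @ X) * (a[:, None] / (np.sqrt(m) * n))
    W -= eta * grad_W
    if k % 50 == 0:
        print(f"iter {k:4d}   empirical risk {0.5 * np.mean(err ** 2):.4e}")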