Uniform Convergence of Gradients for Non-Convex Learning and Optimization

2020-02-13

Abstract

 We investigate 1) the rate at which refined properties of the empirical risk (in particular, gradients) converge to their population counterparts in standard non-convex learning tasks, and 2) the consequences of this convergence for optimization. Our analysis follows the tradition of norm-based capacity control. We propose vector-valued Rademacher complexities as a simple, composable, and user-friendly tool to derive dimension-free uniform convergence bounds for gradients in non-convex learning problems. As an application of our techniques, we give a new analysis of batch gradient descent methods for non-convex generalized linear models and non-convex robust regression, showing how to use any algorithm that finds approximate stationary points to obtain optimal sample complexity, even when dimension is high or possibly infinite and multiple passes over the dataset are allowed. Moving to non-smooth models, we show that, in contrast to the smooth case, even for a single ReLU it is not possible to obtain dimension-independent convergence rates for gradients in the worst case. On the positive side, it is still possible to obtain dimension-independent rates under a new type of distributional assumption.
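To make the application concrete, below is a minimal illustrative sketch (not the paper's algorithm or its analysis) of the setting the abstract describes: batch gradient descent on a synthetic non-convex generalized linear model (a sigmoid link composed with squared loss, which is non-convex in the parameters), run until an approximate stationary point of the empirical risk is reached. All data dimensions, step size, and tolerance values here are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic non-convex GLM: sigmoid link with squared loss.
# Sizes and constants are illustrative assumptions only.
n, d = 500, 20
X = rng.normal(size=(n, d)) / np.sqrt(d)   # rows have norm ~1
w_star = rng.normal(size=d)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

y = sigmoid(X @ w_star) + 0.05 * rng.normal(size=n)

def loss_and_grad(w):
    p = sigmoid(X @ w)
    r = p - y
    # empirical risk: (1/2n) * sum_i (sigmoid(<w, x_i>) - y_i)^2
    loss = 0.5 * np.mean(r ** 2)
    # chain rule: residual * sigmoid'(z) * x
    grad = X.T @ (r * p * (1.0 - p)) / n
    return loss, grad

# Batch gradient descent, stopped at an approximate stationary
# point of the empirical risk (small gradient norm).
w = np.zeros(d)
eta, eps = 10.0, 1e-4
for _ in range(20000):
    _, g = loss_and_grad(w)
    if np.linalg.norm(g) <= eps:
        break
    w = w - eta * g

print("final empirical gradient norm:", np.linalg.norm(loss_and_grad(w)[1]))
```

The paper's point is about what such a stationary point buys you statistically: if gradients of the empirical risk converge uniformly to those of the population risk, then a small empirical gradient certifies a small population gradient, independently of the dimension in the smooth case.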

