
Generalization Properties and Implicit Regularization for Multiple Passes SGM

2020-03-06

Abstract

We study the generalization properties of stochastic gradient methods for learning with convex loss functions and linearly parameterized functions. We show that, in the absence of penalizations or constraints, the stability and approximation properties of the algorithm can be controlled by tuning either the step-size or the number of passes over the data. In this view, these parameters can be seen to control a form of implicit regularization. Numerical results complement the theoretical findings.
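The abstract's central point — that the step-size and the number of passes over the data act as implicit regularization parameters, with no explicit penalty or constraint — can be illustrated by a minimal sketch. The code below is a hypothetical implementation of multiple-pass SGM for least squares with a linear parameterization (the paper's setting covers general convex losses); `step_size` and `n_passes` are the two tuning knobs the abstract refers to.

```python
import numpy as np

def multi_pass_sgm(X, y, step_size, n_passes, seed=0):
    """Multiple-pass stochastic gradient method for least squares.

    A sketch, not the paper's exact algorithm: one example is
    visited per iteration, and each pass sweeps the data once.
    No penalty term or constraint appears; regularization comes
    only from choosing step_size and n_passes.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_passes):
        for i in rng.permutation(n):  # shuffle within each pass
            # gradient of the squared loss (1/2)(<w, x_i> - y_i)^2
            grad = (X[i] @ w - y[i]) * X[i]
            w -= step_size * grad
    return w

# Tiny synthetic check on a well-specified linear model.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=200)
w_hat = multi_pass_sgm(X, y, step_size=0.01, n_passes=20)
```

In this view, stopping after few passes (or shrinking the step-size) keeps the iterate close to its initialization, playing the role that an explicit penalty would otherwise play.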

