A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets


Abstract

We propose a new stochastic gradient method for optimizing the sum of a finite set of smooth functions, where the sum is strongly convex. While standard stochastic gradient methods converge at sublinear rates for this problem, the proposed method incorporates a memory of previous gradient values in order to achieve a linear convergence rate. In a machine learning context, numerical experiments indicate that the new algorithm can dramatically outperform standard algorithms, both in terms of optimizing the training error and reducing the test error quickly.
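The method introduced in this paper is known as the stochastic average gradient (SAG). The sketch below is a minimal illustration of the gradient-memory idea described in the abstract, assuming an ℓ2-regularized least-squares objective: a table stores the most recently computed gradient of each f_i, each iteration refreshes one randomly chosen entry, and the iterate moves along the average of the stored gradients. The function name sag, the 1/(16L) step size, and the synthetic test problem are illustrative choices, not the authors' reference code.

```python
import numpy as np

def sag(A, b, step, n_iters, lam=0.0, seed=0):
    """Minimize (1/n) * sum_i f_i(x), where each
    f_i(x) = 0.5 * (a_i @ x - b_i)**2 + 0.5 * lam * ||x||^2,
    by stepping along the average of the most recently
    computed gradient of every f_i."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    grad_mem = np.zeros((n, d))  # last gradient evaluated for each f_i
    grad_sum = np.zeros(d)       # running sum of the stored gradients
    for _ in range(n_iters):
        i = rng.integers(n)                        # sample one training example
        g_i = (A[i] @ x - b[i]) * A[i] + lam * x   # fresh gradient of f_i at x
        grad_sum += g_i - grad_mem[i]              # swap out the stale entry
        grad_mem[i] = g_i
        x -= (step / n) * grad_sum                 # average-gradient step
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 10))
    b = A @ rng.standard_normal(10)
    lam = 0.1
    # Per-example smoothness bound for this quadratic loss; 1/(16L) is
    # a conservative constant step size in the spirit of the paper.
    L = np.max(np.sum(A * A, axis=1)) + lam
    x_hat = sag(A, b, step=1.0 / (16 * L), n_iters=50_000, lam=lam)
    print("residual:", np.linalg.norm(A @ x_hat - b))
```

Because the stored sum is updated incrementally, each iteration costs O(d), the same as plain stochastic gradient descent; the price is O(nd) memory for the gradient table (reducible to O(n) for linear models, where each gradient is a scalar multiple of a_i).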

