
Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n)

2020-01-16

Abstract

We consider the stochastic approximation problem where a convex function has to be minimized, given only the knowledge of unbiased estimates of its gradients at certain points, a framework which includes machine learning methods based on the minimization of the empirical risk. We focus on problems without strong convexity, for which all previously known algorithms achieve a convergence rate for function values of O(1/√n) after n iterations. We consider and analyze two algorithms that achieve a rate of O(1/n) for classical supervised learning problems. For least-squares regression, we show that averaged stochastic gradient descent with constant step-size achieves the desired rate. For logistic regression, this is achieved by a simple novel stochastic gradient algorithm that (a) constructs successive local quadratic approximations of the loss functions, while (b) preserving the same running-time complexity as stochastic gradient descent. For these algorithms, we provide a non-asymptotic analysis of the generalization error (in expectation, and also in high probability for least-squares), and run extensive experiments showing that they often outperform existing approaches.
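For the least-squares case, the algorithm described in the abstract is plain stochastic gradient descent with a constant step-size combined with averaging of the iterates. The following is a minimal sketch, not the authors' reference implementation: the function name, the single-pass sampling scheme, and the default step-size γ = 1/(4R²) (with R² an upper bound on the squared feature norms) are illustrative assumptions made here for concreteness.

```python
import numpy as np

def averaged_sgd_least_squares(X, y, step_size=None, seed=0):
    """Constant step-size averaged SGD sketch for least-squares regression.

    One pass over the data, one sample per iteration, and a running
    (Polyak-Ruppert) average of the iterates, which is the quantity
    returned. Step-size choice and sampling scheme are assumptions.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    if step_size is None:
        R2 = np.max(np.sum(X ** 2, axis=1))   # bound on ||x_i||^2 (assumed choice)
        step_size = 1.0 / (4.0 * R2)

    theta = np.zeros(d)        # current SGD iterate
    theta_bar = np.zeros(d)    # running average of the iterates

    for t in range(n):
        i = rng.integers(n)                          # draw one observation
        grad = (X[i] @ theta - y[i]) * X[i]          # unbiased gradient estimate
        theta = theta - step_size * grad             # constant step-size update
        theta_bar += (theta - theta_bar) / (t + 1)   # online averaging

    return theta_bar
```

The averaged iterate `theta_bar`, rather than the last iterate `theta`, is what the analysis in the paper concerns; returning the average is therefore the essential design choice in this sketch.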

