
Stochastic Gradient Descent with Only One Projection

2020-01-13

Abstract

Although many variants of stochastic gradient descent have been proposed for large-scale convex optimization, most of them require projecting the solution at each iteration to ensure that the obtained solution stays within the feasible domain. For complex domains (e.g., the positive semidefinite cone), the projection step can be computationally expensive, making stochastic gradient descent unattractive for large-scale optimization problems. We address this limitation by developing novel stochastic optimization algorithms that do not need intermediate projections. Instead, only one projection at the last iteration is needed to obtain a feasible solution in the given domain. Our theoretical analysis shows that with a high probability, the proposed algorithms achieve an $O(1/\sqrt{T})$ convergence rate for general convex optimization, and an $O(\ln T/T)$ rate for strongly convex optimization under mild conditions about the domain and the objective function.
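The structural idea is easy to sketch. The toy Python below contrasts classic projected SGD (one projection per step) with a variant that iterates unconstrained and projects only the final averaged solution. This is an illustrative sketch, not the paper's exact algorithm: the authors additionally augment the objective with a penalty on the constraint violation so that unprojected iterates stay close to the domain, which the sketch omits. The `project_l2_ball` domain, the $\eta/\sqrt{t}$ step sizes, and the noisy quadratic objective are all assumptions made for the demo.

```python
import numpy as np

def project_l2_ball(x, radius=1.0):
    """Euclidean projection onto the L2 ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def projected_sgd(grad, x0, steps, eta=0.1, radius=1.0):
    """Classic projected SGD: project back onto the domain at every iteration."""
    x = project_l2_ball(x0, radius)
    avg = np.zeros_like(x)
    for t in range(1, steps + 1):
        x = project_l2_ball(x - (eta / np.sqrt(t)) * grad(x), radius)
        avg += x
    return avg / steps

def one_projection_sgd(grad, x0, steps, eta=0.1, radius=1.0):
    """Sketch of the one-projection idea: iterate without projecting, then
    project the averaged solution once at the end. (The paper's actual method
    also penalizes constraint violation during the iterations.)"""
    x = x0.copy()
    avg = np.zeros_like(x)
    for t in range(1, steps + 1):
        x = x - (eta / np.sqrt(t)) * grad(x)
        avg += x
    return project_l2_ball(avg / steps, radius)

if __name__ == "__main__":
    # Demo: minimize ||x - c||^2 over the unit ball, with noisy gradients.
    # The unconstrained optimum c lies outside the ball, so the constrained
    # optimum is its projection [1, 0].
    rng = np.random.default_rng(0)
    c = np.array([2.0, 0.0])
    grad = lambda x: 2.0 * (x - c) + 0.1 * rng.standard_normal(2)
    x0 = np.zeros(2)
    print(projected_sgd(grad, x0, steps=2000))       # ~ [1, 0]
    print(one_projection_sgd(grad, x0, steps=2000))  # ~ [1, 0] after one projection
```

The payoff is in the per-iteration cost: for a domain like the positive semidefinite cone, each projection requires a full eigendecomposition, so replacing $T$ projections with a single final one can dominate the total running time, and the paper's analysis shows the final projected average still attains the stated convergence rates.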

