
Global Convergence of Stochastic Gradient Descent for Some Non-convex Matrix Problems

2020-03-05

Abstract

Stochastic gradient descent (SGD) on a low-rank factorization (Burer & Monteiro, 2003) is commonly employed to speed up matrix problems including matrix completion, subspace tracking, and SDP relaxation. In this paper, we exhibit a step size scheme for SGD on a low-rank least-squares problem, and we prove that, under broad sampling conditions, our method converges globally from a random starting point within O(ε⁻¹ n log n) steps with constant probability for constant-rank problems. Our modification of SGD relates it to stochastic power iteration. We also show experiments to illustrate the runtime and convergence of the algorithm.
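The abstract refers to running SGD on a Burer-Monteiro low-rank factorization of a least-squares objective. Below is a minimal sketch of that general setup for matrix completion in Python/NumPy. It is illustrative only: it uses a plain constant step size rather than the paper's step-size scheme, and the dimensions, sampling model, and learning rate `eta` are assumptions, not values from the paper.

```python
import numpy as np

# Sketch: SGD on a low-rank (Burer-Monteiro) factorization for matrix
# completion. Recover a rank-r matrix M ≈ X X^T from randomly sampled
# entries by running SGD on the least-squares objective
#   f(X) = sum over sampled (i, j) of ((X X^T)_{ij} - M_{ij})^2.
# All constants are illustrative; this is not the paper's step-size scheme.

rng = np.random.default_rng(0)

n, r = 100, 2                          # problem size and rank (assumed)
X_true = rng.standard_normal((n, r))
M = X_true @ X_true.T                  # ground-truth low-rank matrix

X = 0.1 * rng.standard_normal((n, r))  # random starting point
eta = 0.02                             # constant step size (assumed)

for t in range(100_000):
    # Sample one entry uniformly at random (a simple sampling model).
    i = rng.integers(0, n)
    j = rng.integers(0, n)
    err = X[i] @ X[j] - M[i, j]        # residual on the sampled entry
    gi = err * X[j]                    # stochastic gradient w.r.t. row i
    gj = err * X[i]                    # stochastic gradient w.r.t. row j
    X[i] -= eta * gi                   # constant factors absorbed into eta
    X[j] -= eta * gj

rel_err = np.linalg.norm(X @ X.T - M) / np.linalg.norm(M)
print(f"relative reconstruction error: {rel_err:.3f}")
```

The factorized parameterization keeps only O(nr) variables instead of O(n²), which is what makes SGD on such problems fast; the cost is that the objective becomes non-convex, which is the setting whose global convergence the paper analyzes.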
