
Stochastic optimization and sparse statistical recovery: Optimal algorithms for high dimensions


Abstract

We develop and analyze stochastic optimization algorithms for problems in which the expected loss is strongly convex, and the optimum is (approximately) sparse. Previous approaches are able to exploit only one of these two structures, yielding an O(d/T) convergence rate for strongly convex objectives in d dimensions and an O(√(s(log d)/T)) convergence rate when the optimum is s-sparse. Our algorithm is based on successively solving a series of ℓ₁-regularized optimization problems using Nesterov’s dual averaging algorithm. We establish that the error of our solution after T iterations is at most O(s(log d)/T), with natural extensions to approximate sparsity. Our results apply to locally Lipschitz losses including the logistic, exponential, hinge and least-squares losses. By recourse to statistical minimax results, we show that our convergence rates are optimal up to constants. The effectiveness of our approach is also confirmed in numerical simulations where we compare to several baselines on a least-squares regression problem.
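The abstract's key building block is ℓ₁-regularized dual averaging: average the stochastic gradients seen so far and soft-threshold that average, which is what drives coordinates exactly to zero. As a rough illustration of that inner solver only, here is a minimal sketch in the style of Xiao's regularized dual averaging applied to stochastic least squares. The function name `l1_rda_least_squares` and the parameters `lam`, `gamma`, and `T` are illustrative choices, not from the paper, and the multi-stage restarting with shrinking regularization that yields the O(s(log d)/T) rate is omitted.

```python
import numpy as np

def l1_rda_least_squares(X, y, lam=0.1, gamma=1.0, T=1000, seed=0):
    """Sketch of l1-regularized dual averaging (RDA-style) on least squares.

    Each step samples one example, updates the running average of the
    stochastic gradients, and applies the closed-form RDA update, which
    soft-thresholds the averaged gradient (this is what produces sparsity).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    g_bar = np.zeros(d)  # running average of stochastic gradients
    for t in range(1, T + 1):
        i = rng.integers(n)
        grad = (X[i] @ w - y[i]) * X[i]  # gradient of 0.5 * (x_i . w - y_i)^2
        g_bar += (grad - g_bar) / t      # incremental averaging
        # Closed-form minimizer of
        #   <g_bar, w> + lam * ||w||_1 + (gamma / sqrt(t)) * ||w||^2 / 2
        shrunk = np.sign(g_bar) * np.maximum(np.abs(g_bar) - lam, 0.0)
        w = -(np.sqrt(t) / gamma) * shrunk
    return w

# Toy usage: sparse ground truth, noisy linear observations.
rng = np.random.default_rng(1)
d, s, n = 200, 5, 1000
w_star = np.zeros(d)
w_star[:s] = 1.0
X = rng.standard_normal((n, d))
y = X @ w_star + 0.1 * rng.standard_normal(n)
w_hat = l1_rda_least_squares(X, y, lam=0.05, gamma=5.0, T=5000)
print("nonzeros:", np.count_nonzero(w_hat),
      "error:", np.linalg.norm(w_hat - w_star))
```

In the paper's scheme this solver is run in epochs, with the ℓ₁ penalty annealed and the search region shrunk between epochs; the single-stage version above only attains the slower √(s(log d)/T)-type rate.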

