
Faster and Non-ergodic O(1/K) Stochastic Alternating Direction Method of Multipliers

2020-02-10

Abstract 

We study stochastic convex optimization subject to linear equality constraints. The traditional Stochastic Alternating Direction Method of Multipliers [1] and its Nesterov acceleration scheme [2] can only achieve ergodic O(1/√K) convergence rates, where K is the number of iterations. By introducing Variance Reduction (VR) techniques, the convergence rates improve to ergodic O(1/K) [3, 4]. In this paper, we propose a new stochastic ADMM that carefully integrates Nesterov's extrapolation and VR techniques. With Nesterov's extrapolation, our algorithm achieves a non-ergodic O(1/K) convergence rate, which is optimal for separable linearly constrained non-smooth convex problems, whereas the convergence rates of VR-based ADMM methods are in fact tight at O(1/√K) in the non-ergodic sense. To the best of our knowledge, this is the first work that achieves a truly accelerated, stochastic convergence rate for constrained convex problems. Experimental results demonstrate that our algorithm is faster than existing state-of-the-art stochastic ADMM methods.
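For context, the ergodic/non-ergodic distinction the abstract draws can be stated as follows (the notation here is ours, not the paper's): the problem class is min_{x,y} f(x) + g(y) subject to Ax + By = b, where f is a finite sum accessed through stochastic gradients. An ergodic rate bounds the suboptimality of the averaged iterate x̄_K = (1/K) Σ_{k=1}^K x_k, while a non-ergodic rate bounds it at the last iterate x_K itself. The latter is the stronger guarantee, since averaging can hide slow per-iterate progress and tends to destroy structure (e.g., sparsity) in the returned solution.

The proposed method combines an SVRG-style variance-reduced gradient estimator with Nesterov extrapolation inside the ADMM loop. Below is a minimal schematic sketch of that combination on an illustrative lasso-type problem; the test problem, step sizes, extrapolation constant, and exact update form are our own simplifications for exposition, not the paper's algorithm or parameter schedule.

import numpy as np

# Schematic sketch: variance-reduced stochastic ADMM with Nesterov-style
# extrapolation, in the spirit of the paper's approach. All names, constants,
# and the lasso test problem are illustrative choices, not the paper's method.

rng = np.random.default_rng(0)
n, d = 200, 50                        # samples, features
D = rng.standard_normal((n, d))
c = D @ (rng.standard_normal(d) * (rng.random(d) < 0.2)) + 0.1 * rng.standard_normal(n)
lam = 0.1                             # l1 weight

# Problem: min_{x,y} (1/2n)||Dx - c||^2 + lam*||y||_1   s.t.  x - y = 0
def grad_i(x, i):                     # gradient of one component f_i
    return D[i] * (D[i] @ x - c[i])

def full_grad(x):                     # full gradient of the smooth part
    return D.T @ (D @ x - c) / n

def soft_threshold(v, t):             # prox of the l1 term
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

beta, eta, theta = 1.0, 0.05, 0.5     # penalty, step size, extrapolation weight
x = np.zeros(d); y = np.zeros(d); u = np.zeros(d)   # primal x, primal y, scaled dual
x_hat = x.copy()                      # extrapolated point
epochs, m = 30, n                     # SVRG snapshot every m inner steps

for s in range(epochs):
    w, mu = x_hat.copy(), full_grad(x_hat)          # snapshot and its full gradient
    for k in range(m):
        i = rng.integers(n)
        g = grad_i(x_hat, i) - grad_i(w, i) + mu    # variance-reduced gradient
        # linearized x-step: argmin <g,x> + (beta/2)||x-y+u||^2 + (1/2eta)||x-x_hat||^2
        x_new = (x_hat / eta + beta * (y - u) - g) / (1.0 / eta + beta)
        y = soft_threshold(x_new + u, lam / beta)   # proximal y-step
        u = u + x_new - y                           # scaled dual ascent
        x_hat = x_new + theta * (x_new - x)         # Nesterov-style extrapolation
        x = x_new

print("final objective:", 0.5 * np.mean((D @ x - c) ** 2) + lam * np.abs(y).sum())

The structural points the sketch tries to convey are that the stochastic gradient is evaluated at the extrapolated point x_hat rather than at x, and that the SVRG correction grad_i(x_hat, i) - grad_i(w, i) + mu drives the gradient noise toward zero as the iterates approach the snapshot w, which is what permits a constant step size.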


