Fast Stochastic Alternating Direction Method of Multipliers

2020-03-04

Abstract

We propose a new stochastic alternating direction method of multipliers (ADMM) algorithm, which incrementally approximates the full gradient in the linearized ADMM formulation. While retaining the low per-iteration complexity of existing stochastic ADMM algorithms, it improves the convergence rate on convex problems from O(1/√T) to O(1/T), where T is the number of iterations. This matches the convergence rate of the batch ADMM algorithm, but without the need to visit all the samples in each iteration. Experiments on the graph-guided fused lasso demonstrate that the new algorithm is significantly faster than state-of-the-art stochastic and batch ADMM algorithms.

