
Dual Averaging and Proximal Gradient Descent for Online Alternating Direction Multiplier Method

2020-03-02

Abstract

We develop new stochastic optimization methods that are applicable to a wide range of structured regularizations. Our methods combine basic stochastic optimization techniques with the Alternating Direction Multiplier Method (ADMM), a general framework for optimizing composite functions that has a wide range of applications. We propose two online variants of ADMM, corresponding to online proximal gradient descent and regularized dual averaging respectively. The proposed algorithms are computationally efficient and easy to implement. Our methods yield O(1/√T) convergence of the expected risk. Moreover, the online proximal gradient descent type method yields O(log(T)/T) convergence for a strongly convex loss. Numerical experiments show the effectiveness of our methods in learning tasks with structured sparsity, such as overlapped group lasso.
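The abstract describes the two online ADMM variants only at a high level. As a concrete illustration of the online proximal gradient descent flavor, the sketch below runs a linearized stochastic ADMM update on the simplest split, min_x E[loss_t(x)] + λ||z||_1 subject to x = z, with a squared loss and a 1/√t step size echoing the stated O(1/√T) rate. The function name online_pg_admm, the loss, the step-size schedule, and all parameter values are illustrative assumptions, not the paper's exact update rules, which handle a general linear constraint Ax = y (e.g., the coordinate-duplicating map used for overlapped group lasso).

```python
import numpy as np

def soft_threshold(v, kappa):
    """Proximal operator of kappa * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def online_pg_admm(stream, dim, lam=0.1, rho=1.0):
    """Hypothetical online proximal-gradient ADMM sketch for
    min_x E[loss_t(x)] + lam * ||z||_1  s.t.  x = z  (scaled dual form)."""
    x = np.zeros(dim)      # primal variable seen by the loss
    z = np.zeros(dim)      # split copy carrying the l1 regularizer
    u = np.zeros(dim)      # scaled dual variable for x - z = 0
    x_avg = np.zeros(dim)  # averaged iterate, as in expected-risk bounds

    for t, (a, b) in enumerate(stream, start=1):
        g = (a @ x - b) * a        # stochastic gradient of 0.5 * (a @ x - b)^2
        eta = 1.0 / np.sqrt(t)     # step size echoing the O(1/sqrt(T)) rate

        # x-step: minimize the linearized loss plus the augmented term
        # (rho/2)||x - z + u||^2 and a proximal term ||x - x_t||^2 / (2*eta);
        # this quadratic has the closed-form minimizer below.
        x = (x / eta + rho * (z - u) - g) / (1.0 / eta + rho)

        # z-step: exact proximal update for the l1 term.
        z = soft_threshold(x + u, lam / rho)

        # dual ascent on the constraint residual x - z.
        u = u + (x - z)

        x_avg += (x - x_avg) / t   # running average of the iterates

    return x_avg

if __name__ == "__main__":
    # Toy sparse regression stream: only the first 3 of 20 coordinates matter.
    rng = np.random.default_rng(0)
    w_true = np.zeros(20)
    w_true[:3] = [2.0, -1.5, 1.0]
    stream = []
    for _ in range(5000):
        a = rng.standard_normal(20)
        stream.append((a, a @ w_true + 0.1 * rng.standard_normal()))
    print(np.round(online_pg_admm(stream, dim=20), 2))
```

Splitting x = z keeps every subproblem in closed form: the x-step is a single vector update and the z-step is soft-thresholding, which is what makes ADMM variants of this kind computationally efficient. For overlapped group lasso, z would instead collect duplicated group copies and the z-step would become groupwise shrinkage.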
