
Accelerated Proximal Stochastic Dual Coordinate Ascent for Regularized Loss Minimization

Posted: 2020-03-04

Abstract

We introduce a proximal version of the stochastic dual coordinate ascent method and show how to accelerate it using an inner-outer iteration procedure. We analyze the runtime of the framework and obtain rates that improve on state-of-the-art results for several key machine learning optimization problems, including SVM, logistic regression, ridge regression, Lasso, and multiclass SVM. Experiments validate our theoretical findings.
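To make the inner-outer structure concrete, here is a minimal Python sketch for one special case: squared loss with an elastic-net regularizer sigma1*||w||_1 + (lam/2)*||w||^2. The inner solver is a prox-SDCA loop that maintains dual variables alpha and reads the primal iterate off via soft-thresholding; the outer loop adds a (kappa/2)*||w - center||^2 proximal term around an extrapolated point and takes a momentum step. The function names, the per-coordinate step size, and the catalyst-style momentum constant beta are illustrative assumptions, not the paper's exact procedure or parameter settings.

```python
import numpy as np

def soft_threshold(v, thr):
    """Elementwise soft-thresholding: the prox operator of thr*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

def prox_sdca(X, y, lam, sigma1, shift=None, kappa=0.0, n_epochs=20, rng=None):
    """Inner solver (sketch): prox-SDCA for
        min_w (1/n) sum_i 0.5*(x_i^T w - y_i)^2
              + sigma1*||w||_1 + (lam/2)*||w||^2 + (kappa/2)*||w - shift||^2.
    The kappa/shift term is the extra proximal center added by the outer loop.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    lam_t = lam + kappa                       # total l2 strength
    center = np.zeros(d) if shift is None else shift
    alpha = np.zeros(n)                       # one dual variable per example
    v = np.zeros(d)                           # v = X^T alpha / (lam_t * n)
    sq_norms = np.einsum('ij,ij->i', X, X)    # ||x_i||^2, precomputed

    def primal(v):
        # w = grad g*(v) for the composite regularizer: the linear term
        # from the proximal center folds into a shift before thresholding.
        return soft_threshold(v + (kappa / lam_t) * center, sigma1 / lam_t)

    for _ in range(n_epochs):
        for i in rng.permutation(n):
            w = primal(v)
            # Closed-form coordinate step for the 1-smooth squared loss
            # (the standard smooth-loss SDCA step size).
            delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + sq_norms[i] / (lam_t * n))
            alpha[i] += delta
            v += (delta / (lam_t * n)) * X[i]
    return primal(v)

def accelerated_prox_sdca(X, y, lam, sigma1, kappa, n_outer=15, inner_epochs=10):
    """Outer loop (sketch): repeatedly solve a kappa-regularized subproblem
    around an extrapolated point, then apply Nesterov-style momentum."""
    # Catalyst-style momentum constant for q = lam / (lam + kappa); assumed here.
    beta = (np.sqrt(lam + kappa) - np.sqrt(lam)) / (np.sqrt(lam + kappa) + np.sqrt(lam))
    w_prev = np.zeros(X.shape[1])
    y_t = w_prev.copy()
    for _ in range(n_outer):
        w = prox_sdca(X, y, lam, sigma1, shift=y_t, kappa=kappa, n_epochs=inner_epochs)
        y_t = w + beta * (w - w_prev)         # extrapolation step
        w_prev = w
    return w_prev
```

With kappa = 0 and a single outer iteration this reduces to plain prox-SDCA; a larger kappa makes each inner subproblem better conditioned at the cost of more outer iterations, which is the trade-off the inner-outer analysis balances.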



Popular Resources

  • Learning to Predi...

    Much of model-based reinforcement learning invo...

  • Stratified Strate...

    In this paper we introduce Stratified Strategy ...

  • The Variational S...

    Unlike traditional images which do not offer in...

  • Learning to learn...

    The move from hand-designed features to learned...

  • A Mathematical Mo...

    Direct democracy, where each voter casts one vo...