Stochastic Primal-Dual Method for Empirical Risk Minimization with O(1) Per-Iteration Complexity


2020-02-17

Abstract 

Regularized empirical risk minimization problems with linear predictors appear frequently in machine learning. In this paper, we propose a new stochastic primal-dual method to solve this class of problems. Unlike existing methods, the proposed method requires only O(1) operations per iteration. We also develop a variance-reduced variant of the algorithm that converges linearly. Numerical experiments suggest that our methods are faster than existing ones, such as proximal SGD, SVRG, and SAGA, on high-dimensional problems.
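To illustrate how a stochastic primal-dual method can reach O(1) per-iteration cost, here is a minimal sketch (not the paper's exact algorithm) for ridge regression written in saddle-point form. Each iteration samples one example index i and one coordinate index j and touches only the single matrix entry A[i, j]; all step sizes and the decay schedule below are illustrative assumptions.

```python
import numpy as np

# Saddle-point form of ridge regression used in this sketch:
#   min_x max_y (1/n) sum_i [ y_i a_i^T x - y_i^2/2 - b_i y_i ] + (lam/2)||x||^2,
# obtained from the conjugate (1/2)(z - b_i)^2 = max_y [ y z - y^2/2 - b_i y ].

def ridge_obj(A, b, x, lam):
    """Primal ridge objective (1/2n)||Ax - b||^2 + (lam/2)||x||^2."""
    n = A.shape[0]
    r = A @ x - b
    return 0.5 * (r @ r) / n + 0.5 * lam * (x @ x)

def spd_o1(A, b, lam=0.1, iters=200_000, seed=0):
    """Doubly stochastic primal-dual sketch: O(1) work per iteration."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)   # primal variable
    y = np.zeros(n)   # dual variable, one entry per example
    for t in range(iters):
        i = rng.integers(n)                 # sample one example
        j = rng.integers(d)                 # sample one coordinate
        eta = 0.5 / np.sqrt(t + 10.0)       # decaying step size (assumed)
        # Dual ascent on y_i: d * A[i, j] * x[j] is an unbiased estimate
        # of a_i^T x built from the single sampled coordinate.
        g_y = d * A[i, j] * x[j] - y[i] - b[i]
        y[i] += eta * g_y
        # Primal descent on x_j: y[i] * A[i, j] is an unbiased estimate
        # of the j-th coordinate of (1/n) A^T y.
        g_x = y[i] * A[i, j] + lam * x[j]
        x[j] -= eta * g_x
    return x
```

For example, on a small synthetic problem the per-iteration cost is constant regardless of n and d, which is the property the abstract highlights; full-gradient or even single-example methods would instead pay O(d) per step.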

