Fast Stochastic Variance Reduced ADMM for Stochastic Composition Optimization

2019-10-31
Abstract: We consider the stochastic composition optimization problem proposed in [Wang et al., 2016a], which has applications ranging from estimation to statistical and machine learning. We propose the first ADMM-based algorithm, named com-SVR-ADMM, and show that com-SVR-ADMM converges linearly for strongly convex and Lipschitz smooth objectives, and has a convergence rate of O(log S / S), which improves upon the O(S^{-4/9}) rate in [Wang et al., 2016b] when the objective is convex and Lipschitz smooth. Moreover, com-SVR-ADMM possesses a rate of O(1/√S) when the objective is convex but without Lipschitz smoothness. We also conduct experiments and show that it outperforms existing algorithms.
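To see why the composition structure min_x f(E_w[g_w(x)]) needs specialized algorithms like the one above, a minimal NumPy sketch (not the paper's com-SVR-ADMM; the instance with `A`, `b`, and the elementwise inner map is an illustrative assumption) shows that the naive one-sample chain-rule gradient, which reuses the same sample for the inner value and the inner Jacobian, is biased:

```python
import numpy as np

# Toy instance of stochastic composition optimization:
#   min_x f(E_w[g_w(x)]),  f(y) = 0.5*||y - b||^2,  g_w(x) = a_w * x (elementwise).
# All names here (A, b, x) are illustrative, not from the paper.
rng = np.random.default_rng(0)
n, d = 2000, 4
A = 1.0 + 0.5 * rng.standard_normal((n, d))  # rows a_w: samples of the inner map
b = rng.standard_normal(d)
x = rng.standard_normal(d)

a_bar = A.mean(axis=0)                       # E_w[a_w]

# Exact gradient of the composite objective via the chain rule:
#   grad F(x) = a_bar * (a_bar * x - b)
true_grad = a_bar * (a_bar * x - b)

# Naive one-sample estimator that reuses the SAME sample w for the inner
# value and the inner Jacobian: a_w * (a_w * x - b). Its expectation is
# E[a_w^2] * x - a_bar * b, which differs from the true gradient by
# Var(a_w) * x — the bias that composition methods must correct.
naive_mean = (A * (A * x - b)).mean(axis=0)
bias = naive_mean - true_grad
predicted_bias = A.var(axis=0) * x           # matches the bias exactly

print(np.max(np.abs(bias - predicted_bias)))
```

Because the inner expectation sits inside the nonlinear outer loss, plain SGD is not unbiased here; this is the difficulty that motivates tracking a separate variance-reduced estimate of the inner function, as the abstract's algorithm does.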
