Abstract
We consider the stochastic composition optimization problem proposed in [Wang et al., 2016a], which has applications ranging from estimation to statistical and machine learning. We propose the first ADMM-based algorithm, named com-SVR-ADMM, and show that com-SVR-ADMM converges linearly for strongly convex and Lipschitz smooth objectives, and achieves a convergence rate of O(log S / S), which improves upon the O(S^{4/9}) rate in [Wang et al., 2016b] when the objective is convex and Lipschitz smooth. Moreover, com-SVR-ADMM possesses a rate of O(1/√S) when the objective is convex but not Lipschitz smooth. We also conduct experiments showing that it outperforms existing algorithms.