
Non-Ergodic Alternating Proximal Augmented Lagrangian Algorithms with Optimal Rates

2020-02-17

Abstract 

We develop two new non-ergodic alternating proximal augmented Lagrangian algorithms (NEAPAL) to solve a class of nonsmooth constrained convex optimization problems. Our approach relies on a novel combination of the augmented Lagrangian framework, an alternating/linearization scheme, Nesterov's acceleration techniques, and an adaptive strategy for parameters. Our algorithms have several new features compared to existing methods. Firstly, they perform a Nesterov acceleration step on the primal variables, in contrast to several methods in the literature that accelerate the dual variables. Secondly, they achieve non-ergodic optimal convergence rates under standard assumptions: an O(1/k) rate without any smoothness or strong convexity-type assumption, or an O(1/k²) rate under only semi-strong convexity, where k is the iteration counter. Thirdly, they preserve or improve the per-iteration complexity of existing algorithms. Fourthly, they can be implemented in a parallel fashion. Finally, all parameters are adaptively updated without heuristic tuning. We verify our algorithms on different numerical examples and compare them with some state-of-the-art methods.
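To make the abstract's ingredients concrete, here is a minimal, illustrative Python/NumPy sketch combining a linearized augmented Lagrangian step, a dual multiplier update, and a Nesterov-style momentum step placed on the primal iterate rather than the dual one, applied to the classic basis pursuit problem min ‖x‖₁ s.t. Ax = b. The function names, the step-size choice beta = ρ‖A‖², and the momentum sequence are illustrative assumptions; this is not the authors' NEAPAL scheme, whose precise updates, adaptive parameter rules, and parallel variant are given in the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def accelerated_linearized_alm(A, b, rho=1.0, iters=500):
    """Linearized augmented Lagrangian iteration with Nesterov-style
    momentum on the PRIMAL iterate, applied to basis pursuit:
        min ||x||_1  s.t.  Ax = b.
    A generic sketch of the ingredients named in the abstract,
    not the NEAPAL update rules from the paper."""
    m, n = A.shape
    # Step-size constant: must dominate the Lipschitz constant
    # rho * ||A||_2^2 of the gradient of the smooth augmented term.
    beta = rho * np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    x_prev = x.copy()
    x_hat = x.copy()          # extrapolated (accelerated) primal point
    lam = np.zeros(m)         # dual multiplier
    t = 1.0
    for _ in range(iters):
        # Linearized primal step: gradient of the augmented term at
        # x_hat, followed by the prox of ||.||_1.
        grad = A.T @ (lam + rho * (A @ x_hat - b))
        x = soft_threshold(x_hat - grad / beta, 1.0 / beta)
        # Dual ascent on the multiplier.
        lam = lam + rho * (A @ x - b)
        # Nesterov momentum applied to the primal iterate only.
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        x_hat = x + ((t - 1.0) / t_next) * (x - x_prev)
        x_prev, t = x.copy(), t_next
    return x, lam

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 200))
    x_true = np.zeros(200)
    x_true[:5] = 1.0
    b = A @ x_true
    x, lam = accelerated_linearized_alm(A, b, iters=2000)
    print("feasibility ||Ax - b|| =", np.linalg.norm(A @ x - b))
```

The design point the abstract highlights is visible in the last two lines of the loop: the extrapolation acts on the primal iterate x, whereas many accelerated primal-dual methods in the literature extrapolate the multiplier lam instead.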
