
Acceleration through Optimistic No-Regret Dynamics

2020-02-13

Abstract

We consider the problem of minimizing a smooth convex function by reducing the optimization to computing the Nash equilibrium of a particular zero-sum convex-concave game. Zero-sum games can be solved using online learning dynamics, where a classical technique involves simulating two no-regret algorithms that play against each other; after T rounds, the average iterate is guaranteed to solve the original optimization problem with error decaying as O(log T / T). In this paper we show that the technique can be enhanced to a rate of O(1/T²) by extending recent work [22, 25] that leverages optimistic learning to speed up equilibrium computation. The resulting optimization algorithm derived from this analysis coincides exactly with the well-known NesterovAcceleration [16] method, and indeed the same story allows us to recover several variants of Nesterov's algorithm via small tweaks. We are also able to establish the accelerated linear rate for a function which is both strongly convex and smooth. This methodology unifies a number of different iterative optimization methods: we show that the HeavyBall algorithm is precisely the non-optimistic variant of NesterovAcceleration, and recent prior work already established a similar perspective on FrankWolfe [2, 1].
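The method the paper recovers coincides with classical Nesterov acceleration. As a hedged illustration of that endpoint only (not the paper's game-dynamics derivation), a minimal NumPy sketch of the standard momentum form might look like the following; the function name `nesterov_agd` and the quadratic test objective are this sketch's own choices:

```python
import numpy as np

def nesterov_agd(grad, x0, L, T):
    """Minimize an L-smooth convex f given its gradient oracle.

    Standard Nesterov momentum form: x is the iterate, y a look-ahead
    point, and the lambda sequence sets the momentum weights. This is
    the scheme that attains the O(1/T^2) rate discussed above.
    """
    x = y = np.asarray(x0, dtype=float)
    lam = 0.0
    for _ in range(T):
        lam_next = (1.0 + np.sqrt(1.0 + 4.0 * lam**2)) / 2.0
        gamma = (lam - 1.0) / lam_next      # momentum coefficient
        x_next = y - grad(y) / L            # gradient step from look-ahead point
        y = x_next + gamma * (x_next - x)   # extrapolate past the new iterate
        x, lam = x_next, lam_next
    return x

# Example: f(x) = 0.5 * x^T A x with A = diag(1, 10), so L = 10
# and the minimizer is the origin.
A = np.diag([1.0, 10.0])
x_star = nesterov_agd(lambda x: A @ x, np.array([5.0, 5.0]), L=10.0, T=200)
```

Dropping the look-ahead (taking the gradient at `x` instead of `y`) yields the HeavyBall-style non-optimistic variant the abstract contrasts with.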

