Abstract
We consider the problem of minimizing a smooth convex function by reducing the optimization to computing the Nash equilibrium of a particular zero-sum convex-concave game. Zero-sum games can be solved using online learning dynamics, where a classical technique involves simulating two no-regret algorithms that play against each other; after T rounds, the average iterate is guaranteed to solve the original optimization problem with error decaying as O(log T / T). In this paper we show that the technique can be enhanced to a rate of O(1/T^2) by extending recent work [22, 25] that leverages optimistic learning to speed up equilibrium computation. The resulting optimization algorithm derived from this analysis coincides exactly with the well-known NesterovAcceleration [16] method, and indeed the same story allows us to recover several variants of Nesterov's algorithm via small tweaks. We are also able to establish the accelerated linear rate for a function which is both strongly convex and smooth. This methodology unifies a number of different iterative optimization methods: we show that the HeavyBall algorithm is precisely the non-optimistic variant of NesterovAcceleration, and recent prior work [2, 1] already established a similar perspective on FrankWolfe.
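To illustrate the classical technique the abstract refers to, here is a minimal sketch of two no-regret algorithms playing a zero-sum matrix game against each other, with the averaged iterates converging to an approximate Nash equilibrium. This is not the paper's algorithm; it is a standard self-play setup using Hedge (multiplicative weights) for both players, on an arbitrary example payoff matrix, where the duality gap of the averaged strategies shrinks at the usual no-regret rate.

```python
import numpy as np

def hedge_selfplay(A, T=4000):
    """Hedge (multiplicative weights) self-play for the zero-sum game
    min_x max_y x^T A y over probability simplices.  Returns the
    time-averaged strategies, which form an approximate equilibrium."""
    n, m = A.shape
    eta = np.sqrt(np.log(max(n, m)) / T)  # standard Hedge step size
    x = np.ones(n) / n                    # row (min) player's strategy
    y = np.ones(m) / m                    # column (max) player's strategy
    x_sum, y_sum = np.zeros(n), np.zeros(m)
    for _ in range(T):
        x_sum += x
        y_sum += y
        loss_x = A @ y                    # row player's loss vector
        gain_y = A.T @ x                  # column player's gain vector
        x = x * np.exp(-eta * loss_x); x /= x.sum()   # min player: descent
        y = y * np.exp(eta * gain_y);  y /= y.sum()   # max player: ascent
    return x_sum / T, y_sum / T

def duality_gap(A, x, y):
    """max_y' x^T A y'  -  min_x' x'^T A y; zero exactly at equilibrium."""
    return (A.T @ x).max() - (A @ y).min()

rng = np.random.default_rng(0)
A = rng.random((5, 5))                    # arbitrary example payoff matrix
x_bar, y_bar = hedge_selfplay(A)
print(duality_gap(A, x_bar, y_bar))       # small positive gap
```

The paper's contribution can be read as replacing plain no-regret updates like these with *optimistic* variants, which is what improves the rate for the smooth minimization problem.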