Acceleration through Optimistic No-Regret Dynamics
We consider the problem of minimizing a smooth convex function by reducing the optimization to computing the Nash equilibrium of a particular zero-sum convex-concave game. Zero-sum games can be solved using no-regret learning dynamics, and the standard approach leads to a rate of O(1/T). We show, however, that the game can be solved at a rate of O(1/T^2), extending recent work [RS13, SALS15] that leverages optimistic learning to speed up equilibrium computation. The optimization algorithm we extract from this equilibrium reduction coincides exactly with the well-known method of Nesterov [N83a], and the same framework allows us to recover several variants of Nesterov's algorithm via small tweaks. This methodology unifies a number of different iterative optimization methods: we show that the Heavy Ball algorithm is precisely the non-optimistic variant of Nesterov's method, and recent prior work already established a similar perspective on Frank-Wolfe [AW17, ALLW18].
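To make the "optimistic learning" idea concrete, here is a minimal sketch (not the paper's exact game reduction or rate analysis) of optimistic gradient descent-ascent on a toy bilinear zero-sum game f(x, y) = x^T A y, whose unique equilibrium is (0, 0). The matrix `A`, step size `eta`, and iteration count are illustrative choices; the optimistic update uses the extrapolated gradient 2*g_t - g_{t-1}, which stabilizes dynamics that would otherwise cycle under plain simultaneous gradient play.

```python
import numpy as np

def optimistic_gda(A, eta=0.1, steps=2000):
    """Optimistic gradient descent-ascent on the bilinear game x^T A y.

    x is the minimizing player, y the maximizing player. Each player
    takes a step against the extrapolated gradient 2*g_t - g_{t-1}
    (the "optimistic" correction), rather than the raw gradient g_t.
    """
    d = A.shape[0]
    x, y = np.ones(d), np.ones(d)
    gx_prev, gy_prev = A @ y, A.T @ x   # previous-round gradients
    for _ in range(steps):
        gx, gy = A @ y, A.T @ x          # gradients of x^T A y at (x, y)
        x = x - eta * (2 * gx - gx_prev)  # descent step for the min player
        y = y + eta * (2 * gy - gy_prev)  # ascent step for the max player
        gx_prev, gy_prev = gx, gy
    return x, y

A = np.array([[1.0, 0.5],
              [0.0, 1.0]])
x, y = optimistic_gda(A)
# Both iterates converge toward the equilibrium (0, 0); plain
# simultaneous gradient descent-ascent on this game does not converge.
```

The averaged iterates of such optimistic dynamics are what yield the faster equilibrium-computation rates in the regret framework; this toy example only illustrates the optimistic update itself.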