A Unified Analysis of First-Order Methods for Smooth Games via Integral Quadratic Constraints
The theory of integral quadratic constraints (IQCs) allows the certification of exponential convergence of interconnected systems containing nonlinear or uncertain elements. In this work, we adapt the IQC theory to study first-order methods for smooth and strongly-monotone games and show how to design tailored quadratic constraints to obtain tight upper bounds on convergence rates. Using this framework, we recover the existing bound for the gradient method (GD), derive sharper bounds for the proximal point method (PPM) and the optimistic gradient method (OG), and provide, for the first time, a global convergence rate for the negative momentum method (NM), with an iteration complexity of O(κ^1.5) that matches its known lower bound. In addition, for time-varying systems, we prove that the gradient method with optimal step size achieves the fastest worst-case convergence rate that can be certified with quadratic Lyapunov functions. Finally, we extend our analysis to stochastic games and study the impact of multiplicative noise on different algorithms. We show that it is impossible for an algorithm with one step of memory to achieve acceleration if it only queries the gradient once per batch (in contrast with the stochastic strongly-convex optimization setting, where such acceleration has been demonstrated). However, we exhibit an algorithm that achieves acceleration with two gradient queries per batch.
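To make the setting concrete, below is a minimal, illustrative Python sketch (not the paper's code, and without the IQC/SDP machinery) of two of the methods discussed above, the gradient method (GD) and the optimistic gradient method (OG), run on a small smooth and strongly-monotone min-max game. The dimension, the strong-monotonicity modulus mu, the coupling matrix A, and the step sizes are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu = 5, 0.5                       # illustrative dimension and strong-monotonicity modulus
A = rng.standard_normal((n, n))      # illustrative coupling matrix
L = np.sqrt(np.linalg.norm(A, 2) ** 2 + mu ** 2)   # Lipschitz constant of the vector field v

def v(z):
    """Game vector field (grad_x f, -grad_y f) of
    f(x, y) = (mu/2)||x||^2 + x^T A y - (mu/2)||y||^2, stacked as z = (x, y).
    It is mu-strongly monotone and its unique zero (the saddle point) is z* = 0."""
    x, y = z[:n], z[n:]
    return np.concatenate([mu * x + A @ y, -A.T @ x + mu * y])

def gradient_method(z0, eta, iters):
    """Simultaneous gradient descent-ascent: z_{k+1} = z_k - eta * v(z_k)."""
    z = z0.copy()
    for _ in range(iters):
        z = z - eta * v(z)
    return np.linalg.norm(z)         # distance to the equilibrium z* = 0

def optimistic_gradient(z0, eta, iters):
    """Optimistic gradient: z_{k+1} = z_k - eta * (2 v(z_k) - v(z_{k-1}))."""
    z, v_prev = z0.copy(), v(z0)
    for _ in range(iters):
        v_curr = v(z)
        z = z - eta * (2 * v_curr - v_prev)
        v_prev = v_curr
    return np.linalg.norm(z)

z0 = rng.standard_normal(2 * n)
iters = 1000
print("||z_k - z*|| after", iters, "steps:")
print("  gradient method    :", gradient_method(z0, eta=mu / L ** 2, iters=iters))
print("  optimistic gradient:", optimistic_gradient(z0, eta=1 / (4 * L), iters=iters))
```

With these standard (conservative) step sizes, GD contracts per step at roughly 1 - 1/κ^2 while OG contracts at roughly 1 - 1/(4κ), where κ = L/mu; this qualitative gap is consistent with the kind of complexity comparisons the abstract describes, though the exact constants here are choices made for this sketch rather than results from the paper.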