Differentiable Game Mechanics

05/13/2019
by Alistair Letcher, et al.

Deep learning is built on the foundational guarantee that gradient descent on an objective function converges to local minima. Unfortunately, this guarantee fails in settings, such as generative adversarial nets, that exhibit multiple interacting losses. The behavior of gradient-based methods in games is not well understood, and is becoming increasingly important as adversarial and multi-objective architectures proliferate. In this paper, we develop new tools to understand and control the dynamics in n-player differentiable games. The key result is a decomposition of the game Jacobian into two components. The first, symmetric component is related to potential games, which reduce to gradient descent on an implicit function. The second, antisymmetric component relates to Hamiltonian games, a new class of games that obey a conservation law akin to the conservation laws of classical mechanical systems. The decomposition motivates Symplectic Gradient Adjustment (SGA), a new algorithm for finding stable fixed points in differentiable games. Basic experiments show SGA is competitive with recently proposed algorithms for finding stable fixed points in GANs, while at the same time being applicable to, and having guarantees in, much more general settings.
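The objects named in the abstract (the simultaneous gradient, the game Jacobian, its symmetric/antisymmetric split, and the SGA update) can be illustrated on a toy two-player game. The following is a minimal sketch, assuming the adjustment takes the form xi + lam * A^T xi with A the antisymmetric part of the Jacobian of the simultaneous gradient; the bilinear losses, step size, and lam value are illustrative choices, not the paper's experimental setup.

    # A toy zero-sum bilinear game: player 1 controls x and minimises x*y,
    # player 2 controls y and minimises -x*y. Plain simultaneous gradient
    # descent cycles or diverges here; the SGA-adjusted update converges.
    import jax
    import jax.numpy as jnp

    def loss1(params):
        x, y = params
        return x * y

    def loss2(params):
        x, y = params
        return -x * y

    def simultaneous_gradient(params):
        # xi = (d loss1 / dx, d loss2 / dy): each player's own-loss gradient.
        g1 = jax.grad(loss1)(params)[0]
        g2 = jax.grad(loss2)(params)[1]
        return jnp.array([g1, g2])

    def sga_step(params, lam=1.0, lr=0.1):
        xi = simultaneous_gradient(params)
        J = jax.jacobian(simultaneous_gradient)(params)  # game Jacobian
        A = 0.5 * (J - J.T)   # antisymmetric ("Hamiltonian") component
        # (the symmetric component 0.5 * (J + J.T) is the "potential" part)
        adjusted = xi + lam * A.T @ xi   # symplectic gradient adjustment
        return params - lr * adjusted

    params = jnp.array([1.0, 1.0])
    for _ in range(200):
        params = sga_step(params)
    print(params)   # heads toward the stable fixed point at the origin

With lam = 0 the step reduces to ordinary simultaneous gradient descent, which spirals away from the origin in this game; the antisymmetric correction is what pulls the iterates toward the stable fixed point.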
