Minimax Theorem for Latent Games or: How I Learned to Stop Worrying about Mixed-Nash and Love Neural Nets

02/14/2020
by Gauthier Gidel, et al.

Adversarial training, a special case of multi-objective optimization, is an increasingly useful tool in machine learning. For example, two-player zero-sum games are important for generative modeling (GANs) and for mastering games like Go or Poker via self-play. A classic result in Game Theory states that one must mix strategies, as pure equilibria may not exist. Surprisingly, machine learning practitioners typically train a single pair of agents – instead of a pair of mixtures – going against Nash's principle. Our main contribution is a notion of limited-capacity-equilibrium for which, as capacity grows, optimal agents – not mixtures – can learn increasingly expressive and realistic behaviors. We define latent games, a new class of game where agents are mappings that transform latent distributions. Examples include generators in GANs, which transform Gaussian noise into distributions on images, and StarCraft II agents, which transform sampled build orders into policies. We show that minimax equilibria in latent games can be approximated by a single pair of dense neural networks. Finally, we apply our latent game approach to solve differentiable Blotto, a game with an infinite strategy space.
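To make the latent-game setup concrete, below is a minimal sketch (not taken from the paper; the network sizes, latent dimension, sigmoid smoothing temperature, and optimizer settings are illustrative assumptions) of differentiable Blotto played by a single pair of neural networks. Each player maps Gaussian latent noise to a budget allocation over battlefields, and the two networks are trained by simultaneous gradient descent-ascent on a smoothed zero-sum payoff.

```python
# Minimal latent-game sketch: differentiable Blotto with one pair of neural nets.
# Each player transforms Gaussian latent noise into an allocation over K
# battlefields (a point on the simplex). The zero-sum payoff counts battlefields
# won, smoothed with a sigmoid so it is differentiable. All hyperparameters here
# are illustrative assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn

K, LATENT_DIM, BATCH = 5, 8, 256   # battlefields, latent size, latent samples per step

def make_player():
    # Maps a latent vector to a simplex allocation over the K battlefields.
    return nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(),
                         nn.Linear(64, K), nn.Softmax(dim=-1))

p1, p2 = make_player(), make_player()
opt1 = torch.optim.Adam(p1.parameters(), lr=1e-3)
opt2 = torch.optim.Adam(p2.parameters(), lr=1e-3)

def payoff(a1, a2, temp=50.0):
    # Smoothed Blotto payoff for player 1: expected number of battlefields
    # where player 1's allocation exceeds player 2's.
    return torch.sigmoid(temp * (a1 - a2)).sum(dim=-1).mean()

for step in range(2000):
    z1 = torch.randn(BATCH, LATENT_DIM)   # latent samples for player 1
    z2 = torch.randn(BATCH, LATENT_DIM)   # latent samples for player 2
    u = payoff(p1(z1), p2(z2))            # player 1 maximizes u, player 2 minimizes u

    # Simultaneous gradient descent-ascent on the single pair of networks.
    opt1.zero_grad(); opt2.zero_grad()
    (-u).backward()                       # gradients of -u for all parameters
    opt1.step()                           # player 1 descends -u, i.e. ascends u
    for p in p2.parameters():
        p.grad = -p.grad                  # flip sign so player 2 descends u
    opt2.step()
```

In this formulation no mixture over pure strategies is maintained: the stochasticity of the latent input plays that role, which is the point the abstract makes about approximating minimax equilibria with a single pair of networks.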
