Minimax Theorem for Latent Games or: How I Learned to Stop Worrying about Mixed-Nash and Love Neural Nets
Adversarial training, a special case of multi-objective optimization, is an increasingly useful tool in machine learning. For example, two-player zero-sum games are important for generative modeling (GANs) and for mastering games like Go or Poker via self-play. A classic result in game theory states that one must mix strategies, as pure equilibria may not exist. Surprisingly, machine learning practitioners typically train a single pair of agents – instead of a pair of mixtures – going against Nash's principle. Our main contribution is a notion of limited-capacity equilibrium for which, as capacity grows, optimal agents – not mixtures – can learn increasingly expressive and realistic behaviors. We define latent games, a new class of games in which agents are mappings that transform latent distributions. Examples include generators in GANs, which transform Gaussian noise into distributions over images, and StarCraft II agents, which transform sampled build orders into policies. We show that minimax equilibria in latent games can be approximated by a single pair of dense neural networks. Finally, we apply our latent-game approach to solve differentiable Blotto, a game with an infinite strategy space.
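The abstract does not include code, but the core idea is concrete enough to sketch. Below is a minimal, illustrative PyTorch sketch (not the paper's implementation) of a latent game: each agent is a single neural network that maps Gaussian latent noise to a strategy, and the pair is trained by alternating gradient steps on a zero-sum payoff. The game used here is a hypothetical differentiable Colonel Blotto in which the hard win indicator is smoothed with a sigmoid; all names and hyperparameters (make_agent, smoothed_payoff, K, TAU) are assumptions for illustration, not taken from the paper.

```python
# A minimal sketch (not the paper's code) of a latent game in PyTorch.
# Each agent is one neural network mapping Gaussian latent noise to a
# budget allocation over K battlefields; the hard Blotto win indicator
# is smoothed with a sigmoid so the payoff is differentiable.
import torch
import torch.nn as nn

K, LATENT_DIM, TAU, BATCH = 5, 8, 0.1, 256

def make_agent():
    # latent vector -> allocation of a unit budget over K battlefields
    return nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(),
                         nn.Linear(64, K), nn.Softmax(dim=-1))

def smoothed_payoff(x, y, tau=TAU):
    # Expected number of battlefields player 1 wins, with the step
    # function 1[x_k > y_k] replaced by a sigmoid of temperature tau.
    return torch.sigmoid((x - y) / tau).sum(dim=-1).mean()

p1, p2 = make_agent(), make_agent()
opt1 = torch.optim.Adam(p1.parameters(), lr=1e-3)
opt2 = torch.optim.Adam(p2.parameters(), lr=1e-3)

for step in range(2000):
    z1 = torch.randn(BATCH, LATENT_DIM)
    z2 = torch.randn(BATCH, LATENT_DIM)

    # Player 1 (maximizer): ascend the payoff with player 2 held fixed.
    opt1.zero_grad()
    (-smoothed_payoff(p1(z1), p2(z2).detach())).backward()
    opt1.step()

    # Player 2 (minimizer): descend the payoff with player 1 held fixed.
    opt2.zero_grad()
    smoothed_payoff(p1(z1).detach(), p2(z2)).backward()
    opt2.step()
```

Because the latent inputs are resampled at every step, each single network implicitly represents a distribution over pure strategies, which is the sense in which one pair of agents, rather than an explicit mixture, can approximate a minimax equilibrium under this sketch's assumptions.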