
A mean-field analysis of two-player zero-sum games
Finding Nash equilibria in two-player zero-sum continuous games is a cen...

Example of a finite game with no Berge equilibria at all
The problem of the existence of Berge equilibria in the sense of Zhukovs...

Calibration of Shared Equilibria in General Sum Partially Observable Markov Games
Training multi-agent systems (MAS) to achieve realistic equilibria gives...

On Existence, Mixtures, Computation and Efficiency in Multi-objective Games
In a multi-objective game, each individual's payoff is a vector-valued f...

GANGs: Generative Adversarial Network Games
Generative Adversarial Networks (GAN) have become one of the most succes...

Finding Mixed Nash Equilibria of Generative Adversarial Networks
We reconsider the training objective of Generative Adversarial Networks ...

Multi-agent trajectory models via game theory and implicit layer-based learning
For prediction of interacting agents' trajectories, we propose an end-to...
Minimax Theorem for Latent Games or: How I Learned to Stop Worrying about Mixed-Nash and Love Neural Nets
Adversarial training, a special case of multi-objective optimization, is an increasingly useful tool in machine learning. For example, two-player zero-sum games are important for generative modeling (GANs) and for mastering games like Go or Poker via self-play. A classic result in Game Theory states that one must mix strategies, as pure equilibria may not exist. Surprisingly, machine learning practitioners typically train a single pair of agents – instead of a pair of mixtures – going against Nash's principle. Our main contribution is a notion of limited-capacity equilibrium for which, as capacity grows, optimal agents – not mixtures – can learn increasingly expressive and realistic behaviors. We define latent games, a new class of games where agents are mappings that transform latent distributions. Examples include generators in GANs, which transform Gaussian noise into distributions on images, and StarCraft II agents, which transform sampled build orders into policies. We show that minimax equilibria in latent games can be approximated by a single pair of dense neural networks. Finally, we apply our latent game approach to solve differentiable Blotto, a game with an infinite strategy space.
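The pure-strategy minimax training the abstract describes can be illustrated on the simplest smooth zero-sum game. The sketch below is not the paper's algorithm: it uses the extragradient method (a standard saddle-point solver) on an assumed bilinear payoff f(θ, φ) = θφ, whose unique saddle point is (0, 0); the step size and iteration count are also illustrative choices.

```python
# Illustrative sketch (not the paper's method): solve the bilinear
# zero-sum game f(theta, phi) = theta * phi for its saddle point (0, 0).
# Plain simultaneous gradient descent-ascent cycles forever on this game;
# the extragradient look-ahead step makes the iterates converge.

def extragradient(theta, phi, lr=0.1, steps=2000):
    """Player 1 minimizes f, player 2 maximizes f, with f = theta * phi."""
    for _ in range(steps):
        # Extrapolation: a look-ahead step from the current point.
        t_half = theta - lr * phi      # grad_theta f = phi
        p_half = phi + lr * theta      # grad_phi   f = theta
        # Update: apply gradients evaluated at the look-ahead point.
        theta = theta - lr * p_half
        phi = phi + lr * t_half
    return theta, phi

theta, phi = extragradient(1.0, 1.0)
print(abs(theta) < 1e-3, abs(phi) < 1e-3)  # prints: True True
```

Starting from (1, 1), each extragradient step contracts the distance to the saddle point by a constant factor, so both parameters decay toward the pure equilibrium at the origin.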