
Vortices Instead of Equilibria in Min-Max Optimization: Chaos and Butterfly Effects of Online Learning in Zero-Sum Games
We establish that algorithmic experiments in zero-sum games "fail misera...

Cycles in adversarial regularized learning
Regularized learning is a fundamental technique in online optimization, ...

Self-similarity in the Kepler-Heisenberg problem
The Kepler-Heisenberg problem is that of determining the motion of a pla...

Convergence of Learning Dynamics in Stackelberg Games
This paper investigates the convergence of learning dynamics in Stackelb...

Dissipative SymODEN: Encoding Hamiltonian Dynamics with Dissipation and Control into Deep Learning
In this work, we introduce Dissipative SymODEN, a deep learning architec...

Fast and Furious Learning in Zero-Sum Games: Vanishing Regret with Non-Vanishing Step Sizes
We show for the first time, to our knowledge, that it is possible to rec...

A Unified View of Large-scale Zero-sum Equilibrium Computation
The task of computing approximate Nash equilibria in large zero-sum exte...
Multi-Agent Learning in Network Zero-Sum Games is a Hamiltonian System
Zero-sum games are natural, if informal, analogues of closed physical systems where no energy/utility can enter or exit. This analogy can be extended even further if we consider zero-sum network (polymatrix) games, where multiple agents interact in a closed economy. Typically, (network) zero-sum games are studied from the perspective of Nash equilibria. Nevertheless, this stands in contrast with the way we typically think about closed physical systems, e.g., Earth-moon systems, which move perpetually along recurrent trajectories of constant energy. We establish a formal and robust connection between multi-agent systems and Hamiltonian dynamics: the same dynamics that describe conservative systems in physics. Specifically, we show that no matter the size or network structure of such closed economies, even if agents use different online learning dynamics from the standard class of Follow-the-Regularized-Leader, they yield Hamiltonian dynamics. This approach generalizes the known connection to Hamiltonians for the special case of replicator dynamics in two-agent zero-sum games developed by Hofbauer. Moreover, our results extend beyond zero-sum settings and provide a type of Rosetta stone (see, e.g., Table 1) that helps translate results and techniques between online optimization, convex analysis, game theory, and physics.
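The conservation claim can be checked numerically in the simplest case the abstract mentions, replicator dynamics in a two-agent zero-sum game. The sketch below (not code from the paper; all function names are illustrative) simulates replicator dynamics on matching pennies and verifies that the sum of KL divergences from the interior equilibrium to the players' current strategies, Hofbauer's conserved quantity, stays essentially constant while the trajectory cycles rather than converging:

```python
import numpy as np

# Matching pennies: a 2x2 zero-sum game. Player 1's payoff matrix is A;
# player 2's payoffs are -A^T (zero-sum).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x_star = np.array([0.5, 0.5])  # the unique (interior) Nash equilibrium for both players

def replicator_field(x, y):
    """Continuous-time replicator dynamics for both players."""
    u = A @ y        # player 1's payoff to each pure strategy
    v = -A.T @ x     # player 2's payoff to each pure strategy
    return x * (u - x @ u), y * (v - y @ v)

def rk4_step(x, y, h):
    """One classical Runge-Kutta step (good enough to track the level set numerically)."""
    k1x, k1y = replicator_field(x, y)
    k2x, k2y = replicator_field(x + h / 2 * k1x, y + h / 2 * k1y)
    k3x, k3y = replicator_field(x + h / 2 * k2x, y + h / 2 * k2y)
    k4x, k4y = replicator_field(x + h * k3x, y + h * k3y)
    return (x + h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            y + h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y))

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def hamiltonian(x, y):
    # Candidate conserved quantity: KL divergence from equilibrium to each player's strategy.
    return kl(x_star, x) + kl(x_star, y)

x = np.array([0.8, 0.2])  # arbitrary interior initial strategies
y = np.array([0.6, 0.4])
H0 = hamiltonian(x, y)
for _ in range(20000):  # integrate to time T = 200 with step h = 0.01
    x, y = rk4_step(x, y, 0.01)

# The strategies keep orbiting the equilibrium instead of converging to it,
# and the "energy" drift is only numerical integration error.
print("energy drift:", abs(hamiltonian(x, y) - H0))
```

For an interior equilibrium, a short calculation shows d/dt [KL(x*, x) + KL(y*, y)] = xᵀA y* − x*ᵀA y, which vanishes because both terms equal the value of the game, so the drift printed above comes only from the integrator.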