
Multi-Agent Learning in Network Zero-Sum Games is a Hamiltonian System
Zero-sum games are natural, if informal, analogues of closed physical systems where no energy/utility can enter or exit. This analogy can be extended even further if we consider zero-sum network (polymatrix) games, where multiple agents interact in a closed economy. Typically, (network) zero-sum games are studied from the perspective of Nash equilibria. Nevertheless, this stands in contrast with the way we typically think about closed physical systems, e.g., Earth-moon systems, which move perpetually along recurrent trajectories of constant energy. We establish a formal and robust connection between multi-agent systems and Hamiltonian dynamics, the same dynamics that describe conservative systems in physics. Specifically, we show that no matter the size or network structure of such closed economies, even if the agents use different online learning dynamics from the standard class of Follow-the-Regularized-Leader, they yield Hamiltonian dynamics. This approach generalizes the known connection to Hamiltonians for the special case of replicator dynamics in two-agent zero-sum games developed by Hofbauer. Moreover, our results extend beyond zero-sum settings and provide a type of Rosetta stone (see, e.g., Table 1) that helps translate results and techniques between online optimization, convex analysis, game theory, and physics.
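The conservation claim can be sketched numerically. The code below (an illustrative sketch, not the paper's implementation) simulates continuous-time replicator dynamics, the Follow-the-Regularized-Leader instance with entropic regularization, in the zero-sum game Matching Pennies. A consequence of the Hamiltonian structure is a conserved "energy": the sum of KL divergences from the mixed Nash equilibrium to the players' current strategies should stay constant along trajectories, up to numerical integration error.

```python
import numpy as np

A = np.array([[1.0, -1.0], [-1.0, 1.0]])  # row player's payoff matrix (Matching Pennies)
x_star = np.array([0.5, 0.5])             # the unique Nash equilibrium strategy (both players)

def replicator(x, y):
    """Replicator vector field for the zero-sum bimatrix game (A, -A^T)."""
    dx = x * (A @ y - x @ A @ y)          # row player's growth rates
    dy = y * (-(A.T @ x) + x @ A @ y)     # column player's growth rates
    return dx, dy

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def energy(x, y):
    """Candidate conserved quantity: KL(x* || x) + KL(y* || y)."""
    return kl(x_star, x) + kl(x_star, y)

# Integrate with classical RK4; the exact flow conserves the energy,
# so any drift seen here is discretization error.
x = np.array([0.9, 0.1])
y = np.array([0.3, 0.7])
h = 0.01
e0 = energy(x, y)
for _ in range(5000):
    k1x, k1y = replicator(x, y)
    k2x, k2y = replicator(x + h/2*k1x, y + h/2*k1y)
    k3x, k3y = replicator(x + h/2*k2x, y + h/2*k2y)
    k4x, k4y = replicator(x + h*k3x, y + h*k3y)
    x = x + h/6*(k1x + 2*k2x + 2*k3x + k4x)
    y = y + h/6*(k1y + 2*k2y + 2*k3y + k4y)

drift = abs(energy(x, y) - e0)
print(f"energy drift after 5000 RK4 steps: {drift:.2e}")
```

Rather than converging to the equilibrium, the strategies cycle around it on a level set of the energy, matching the "recurrent trajectories of constant energy" picture from the abstract.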