Multi-Agent Learning in Network Zero-Sum Games is a Hamiltonian System
Zero-sum games are natural, if informal, analogues of closed physical systems in which no energy/utility can enter or exit. This analogy extends even further to zero-sum network (polymatrix) games, where multiple agents interact in a closed economy. Typically, (network) zero-sum games are studied from the perspective of Nash equilibria. This, however, contrasts with how we usually think about closed physical systems, e.g., the Earth-Moon system, which moves perpetually along recurrent trajectories of constant energy. We establish a formal and robust connection between multi-agent systems and Hamiltonian dynamics -- the same dynamics that describe conservative systems in physics. Specifically, we show that no matter the size or network structure of such closed economies, and even if agents use different online learning dynamics from the standard class of Follow-the-Regularized-Leader, the resulting dynamics are Hamiltonian. This generalizes the known connection to Hamiltonian dynamics for the special case of replicator dynamics in two-agent zero-sum games, developed by Hofbauer. Moreover, our results extend beyond zero-sum settings and provide a Rosetta stone of sorts (see, e.g., Table 1) that helps translate results and techniques between online optimization, convex analysis, game theory, and physics.
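The conservative, cycling behavior described above can be sketched numerically in the simplest special case: replicator dynamics (a member of the FTRL class, arising from entropic regularization) in the two-player zero-sum game of matching pennies. The classical conserved quantity here, due to Hofbauer, is the sum of KL divergences from the interior Nash equilibrium to the current strategy pair; the simulation below checks that it stays (approximately) constant along a forward-Euler trajectory, so play cycles around the equilibrium rather than converging to it. This is an illustrative sketch, not code from the paper; the step size and initial conditions are arbitrary choices.

```python
import math

# Matching pennies: player 1 maximizes x^T A y, player 2 minimizes it.
A = [[1.0, -1.0], [-1.0, 1.0]]

def replicator_step(x, y, h):
    """One forward-Euler step of replicator dynamics for a zero-sum game."""
    Ay = [sum(A[i][j] * y[j] for j in range(2)) for i in range(2)]   # (A y)_i
    ATx = [sum(A[i][j] * x[i] for i in range(2)) for j in range(2)]  # (A^T x)_j
    v = sum(x[i] * Ay[i] for i in range(2))                          # value x^T A y
    # Player 1 grows strategies doing better than average; player 2 the reverse.
    nx = [x[i] + h * x[i] * (Ay[i] - v) for i in range(2)]
    ny = [y[j] + h * y[j] * (v - ATx[j]) for j in range(2)]
    return nx, ny

def energy(x, y):
    """Sum of KL divergences from the interior Nash equilibrium (uniform play)
    to (x, y) -- the quantity conserved by replicator dynamics in zero-sum games."""
    star = [0.5, 0.5]
    return (sum(star[i] * math.log(star[i] / x[i]) for i in range(2))
            + sum(star[j] * math.log(star[j] / y[j]) for j in range(2)))

x, y = [0.7, 0.3], [0.4, 0.6]   # start away from equilibrium
e0 = energy(x, y)
for _ in range(200_000):         # integrate to t = 20 with h = 1e-4
    x, y = replicator_step(x, y, 1e-4)
drift = abs(energy(x, y) - e0)
print(f"energy drift after t=20: {drift:.2e}")
print(f"final strategies: x={x}, y={y}")
```

The small drift (an artifact of forward Euler; a symplectic integrator would do better) confirms the Hamiltonian picture: trajectories orbit the equilibrium at roughly constant "energy" instead of converging to it.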