Higher-Order Uncoupled Dynamics Do Not Lead to Nash Equilibrium – Except When They Do

04/09/2023
by Sarah A. Toonsi, et al.

The framework of multi-agent learning explores the dynamics of how individual agent strategies evolve in response to the evolving strategies of other agents. Of particular interest is whether agent strategies converge to well-known solution concepts such as Nash Equilibrium (NE). Most “fixed order” learning dynamics restrict an agent's underlying state to be its own strategy. In “higher order” learning, agent dynamics can include auxiliary states that can capture phenomena such as path dependencies. We introduce higher-order gradient play dynamics that resemble projected gradient ascent with auxiliary states. The dynamics are “payoff based” in that each agent's dynamics depend on its own evolving payoff. While these payoffs depend on the strategies of other agents in a game setting, agent dynamics do not depend explicitly on the nature of the game or the strategies of other agents. In this sense, dynamics are “uncoupled” since an agent's dynamics do not depend explicitly on the utility functions of other agents. We first show that for any specific game with an isolated completely mixed-strategy NE, there exist higher-order gradient play dynamics that lead (locally) to that NE, both for the specific game and for nearby games with perturbed utility functions. Conversely, we show that for any higher-order gradient play dynamics, there exists a game with a unique isolated completely mixed-strategy NE for which the dynamics do not lead to NE. These results build on prior work showing that uncoupled fixed-order learning cannot lead to NE in certain instances, whereas higher-order variants can. Finally, we consider the mixed-strategy equilibrium associated with coordination games. While higher-order gradient play can converge to such equilibria, we show that such dynamics must be inherently internally unstable.
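The gap between fixed-order and higher-order dynamics can be illustrated with a small numerical sketch. This is our own illustrative example, not the paper's exact construction: on matching pennies (a game with a unique completely mixed NE at (1/2, 1/2)), plain projected gradient play orbits the equilibrium without converging, while adding a single auxiliary filter state per agent — a higher-order modification that remains uncoupled and payoff-based, since each agent only filters its own payoff gradient — damps the rotation. The step size `eta` and filter gain `lam` are arbitrary illustrative choices.

```python
import numpy as np

A = np.array([[1.0, -1.0], [-1.0, 1.0]])  # matching pennies payoffs for player 1

def grad1(p, q):
    # d/dp of u1(x, y) with x = (p, 1-p), y = (q, 1-q)
    g = A @ np.array([q, 1 - q])
    return g[0] - g[1]

def grad2(p, q):
    # player 2's payoff is -u1, so use -A^T
    g = -A.T @ np.array([p, 1 - p])
    return g[0] - g[1]

def run(higher_order, steps=20000, eta=0.01, lam=0.5):
    p, q = 0.9, 0.2        # initial mixed strategies (prob. of first action)
    zp, zq = 0.0, 0.0      # auxiliary states, used only in the higher-order variant
    for _ in range(steps):
        gp, gq = grad1(p, q), grad2(p, q)
        if higher_order:
            # each agent low-pass filters its own payoff gradient and adds the
            # filtered-derivative term (gp - zp), which damps the rotation
            p_new = np.clip(p + eta * (gp + lam * (gp - zp)), 0, 1)
            q_new = np.clip(q + eta * (gq + lam * (gq - zq)), 0, 1)
            zp += eta * (gp - zp)
            zq += eta * (gq - zq)
        else:
            # fixed-order projected gradient play: state = own strategy only
            p_new = np.clip(p + eta * gp, 0, 1)
            q_new = np.clip(q + eta * gq, 0, 1)
        p, q = p_new, q_new
    return p, q
```

With these settings, `run(True)` settles at the mixed NE (1/2, 1/2), while `run(False)` keeps cycling around it; each agent's update still uses only its own payoff gradient, so the higher-order dynamics remain uncoupled.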


