Learning Nash Equilibria in Monotone Games

04/03/2019
by   Tatiana Tatarenko, et al.

We consider multi-agent decision making where each agent's cost function depends on all agents' strategies. We propose a distributed algorithm to learn a Nash equilibrium in which each agent uses only the observed values of her own cost function at the joint actions played, with no information about the functional form of her cost or about other agents' costs or strategy sets. In contrast to past work, where convergent algorithms required strong monotonicity, we prove convergence of the algorithm under a mere monotonicity assumption. This significantly widens the algorithm's applicability, for example to games with linear coupling constraints.
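
To illustrate the payoff-based (zeroth-order) learning setting described in the abstract, below is a minimal sketch, not the authors' algorithm: a single-point pseudo-gradient scheme with vanishing regularization on a merely monotone two-player quadratic game. The cost functions, perturbation rule, step-size, smoothing, and regularization schedules are all assumptions chosen for illustration.

    import numpy as np

    # Minimal illustrative sketch (not the paper's exact algorithm): payoff-based
    # (zeroth-order) Nash equilibrium seeking in a two-player monotone quadratic
    # game. Each agent observes only the value of her own cost at the perturbed
    # joint action and forms a one-point gradient estimate; a vanishing
    # regularization term handles mere (non-strong) monotonicity, which here
    # leaves a whole line of equilibria x1 + x2 = 0.

    rng = np.random.default_rng(0)

    def cost(i, x):
        # Hypothetical costs: cost_i(x) = 0.5*x_i^2 + x_i*x_j (monotone, not strongly monotone)
        j = 1 - i
        return 0.5 * x[i] ** 2 + x[i] * x[j]

    x = np.array([2.0, -1.5])              # initial joint action
    for t in range(1, 50001):
        sigma = t ** -0.2                  # exploration radius (assumed schedule)
        gamma = 0.5 * t ** -0.8            # step size (assumed schedule)
        eps = t ** -0.1                    # regularization weight (assumed schedule)

        # Each agent perturbs her own action; only her own cost value is queried.
        u = rng.choice([-1.0, 1.0], size=2)
        x_pert = x + sigma * u
        grad_est = np.array([cost(i, x_pert) * u[i] / sigma for i in range(2)])

        # Regularized pseudo-gradient step, projected onto a box strategy set.
        x = np.clip(x - gamma * (grad_est + eps * x), -5.0, 5.0)

    print("approximate Nash equilibrium:", x)  # iterates should drift toward the least-norm equilibrium (0, 0)

In this toy game the pseudo-gradient is only monotone, so the equilibrium set is a line; the vanishing regularization is one standard way to single out the least-norm equilibrium under such assumptions.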

Related research

06/14/2017: A distributed algorithm for average aggregative games with coupling constraints
We consider the framework of average aggregative games, where the cost f...

01/27/2022: Efficient Distributed Learning in Stochastic Non-cooperative Games without Information Exchange
In this work, we study stochastic non-cooperative games, where only nois...

07/27/2021: Gradient Play in n-Cluster Games with Zero-Order Information
We study a distributed approach for seeking a Nash equilibrium in n-clus...

10/01/2019: The Nash Equilibrium with Inertia in Population Games
In the traditional game-theoretic set up, where agents select actions an...

05/23/2023: Memory Asymmetry Creates Heteroclinic Orbits to Nash Equilibrium in Learning in Zero-Sum Games
Learning in games considers how multiple agents maximize their own rewar...

04/09/2018: Prior Independent Equilibria and Linear Multi-dimensional Bayesian Games
We show that a Bayesian strategy map profile is a Bayesian Nash Equilibr...

01/30/2022: Multimodal Maximum Entropy Dynamic Games
Environments with multi-agent interactions often result a rich set of mo...
