Equilibrium Finding in Normal-Form Games Via Greedy Regret Minimization

04/11/2022
by Hugh Zhang, et al.

We extend the classic regret minimization framework for approximating equilibria in normal-form games by greedily weighting iterates based on the regrets observed at runtime. Theoretically, our method retains all previous convergence-rate guarantees. Empirically, experiments on large randomly generated games and on normal-form subgames of the AI benchmark Diplomacy show that greedy weights outperform previous methods whenever sampling is used, sometimes by several orders of magnitude.
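The abstract describes weighting the averaged iterates greedily by the regrets observed at runtime. Below is a minimal, hypothetical sketch of that idea on a two-player zero-sum normal-form game: a standard regret-matching self-play loop whose running average uses a made-up weight of 1 / (1 + positive instantaneous regret), so that lower-regret iterates count more. The function name, the specific weight formula, and the absence of sampling are illustrative assumptions; the paper's actual greedy-weights scheme is not reproduced here.

```python
import numpy as np


def regret_matching_greedy(payoff, iters=10_000):
    """Self-play regret matching on a two-player zero-sum normal-form game.

    `payoff` is the row player's payoff matrix; the column player receives
    its negation. Iterates are averaged with a hypothetical greedy weight,
    1 / (1 + positive instantaneous regret), as a stand-in for the paper's
    runtime-regret weighting.
    """
    n_rows, n_cols = payoff.shape
    cum_regret = [np.zeros(n_rows), np.zeros(n_cols)]
    weighted_avg = [np.zeros(n_rows), np.zeros(n_cols)]

    def strategy(r):
        # Regret matching: play actions in proportion to positive regret.
        pos = np.maximum(r, 0.0)
        total = pos.sum()
        return pos / total if total > 0 else np.full(r.size, 1.0 / r.size)

    for _ in range(iters):
        x = strategy(cum_regret[0])   # row player's current strategy
        y = strategy(cum_regret[1])   # column player's current strategy

        # Expected payoff of each pure action against the opponent's strategy.
        row_action_vals = payoff @ y
        col_action_vals = -(payoff.T @ x)

        # Instantaneous regret of this iterate for each player.
        inst = [row_action_vals - x @ row_action_vals,
                col_action_vals - y @ col_action_vals]

        for p, s in ((0, x), (1, y)):
            cum_regret[p] += inst[p]
            # Hypothetical greedy weight: iterates with less positive regret
            # contribute more to the average (illustrative only).
            w = 1.0 / (1.0 + np.maximum(inst[p], 0.0).sum())
            weighted_avg[p] += w * s

    # Normalize the weighted averages into strategies.
    return [a / a.sum() for a in weighted_avg]


# Usage: matching pennies, whose unique equilibrium is uniform for both players.
pennies = np.array([[1.0, -1.0],
                    [-1.0, 1.0]])
x_bar, y_bar = regret_matching_greedy(pennies, iters=5_000)
print(x_bar, y_bar)  # both approach [0.5, 0.5]
```

In this sketch the greedy weight only changes how the average is formed, not the regret-matching updates themselves, which is consistent with the abstract's claim that previous convergence-rate guarantees are retained.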
