Fictitious Play Outperforms Counterfactual Regret Minimization

01/30/2020
by   Sam Ganzfried, et al.

We compare the performance of two popular iterative algorithms, fictitious play and counterfactual regret minimization, for approximating Nash equilibrium in multiplayer games. Despite the recent success of counterfactual regret minimization in multiplayer poker and conjectures of its superiority, we show that fictitious play produces a better Nash equilibrium approximation, with statistical significance, across a variety of game sizes.
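For context, here is a minimal sketch of fictitious play in a two-player normal-form game; the function name, payoff matrices, and iteration count are illustrative and not taken from the paper. Each round, each player best-responds to the opponent's empirical average strategy, and the averaged strategies approximate a Nash equilibrium in zero-sum games.

```python
import numpy as np

def fictitious_play(A, B, iterations=10000):
    """Fictitious play in a two-player normal-form game.

    A, B: payoff matrices for players 1 and 2 (shape m x n).
    Each iteration, each player plays a best response to the
    opponent's empirical (average) strategy so far; the average
    strategies converge to a Nash equilibrium in zero-sum games.
    """
    m, n = A.shape
    counts1 = np.zeros(m)  # how often player 1 has played each action
    counts2 = np.zeros(n)  # how often player 2 has played each action
    counts1[0] += 1        # arbitrary initial actions
    counts2[0] += 1
    for _ in range(iterations):
        avg1 = counts1 / counts1.sum()  # player 1's empirical strategy
        avg2 = counts2 / counts2.sum()  # player 2's empirical strategy
        br1 = np.argmax(A @ avg2)       # best response for player 1
        br2 = np.argmax(avg1 @ B)       # best response for player 2
        counts1[br1] += 1
        counts2[br2] += 1
    return counts1 / counts1.sum(), counts2 / counts2.sum()

# Example: matching pennies (zero-sum); both averages approach (0.5, 0.5).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, y = fictitious_play(A, -A)
print(x, y)
```

Counterfactual regret minimization differs in that it operates on the extensive form, minimizing regret independently at each information set; the paper's comparison is between these two iterative self-play schemes.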


Related Research

11/09/2017 · Regret Minimization in Behaviorally-Constrained Zero-Sum Games
No-regret learning has emerged as a powerful tool for solving extensive-...

08/06/2020 · Solving imperfect-information games via exponential counterfactual regret minimization
Two agents' decision-making problems can be modeled as the game with two...

05/23/2018 · On self-play computation of equilibrium in poker
We compare performance of the genetic algorithm and the counterfactual r...

01/29/2023 · Recommender system as an exploration coordinator: a bounded O(1) regret algorithm for large platforms
On typical modern platforms, users are only able to try a small fraction...

12/30/2021 · From Behavioral Theories to Econometrics: Inferring Preferences of Human Agents from Data on Repeated Interactions
We consider the problem of estimating preferences of human agents from d...

07/02/2018 · Analysis and Optimization of Deep Counterfactual Value Networks
Recently a strong poker-playing algorithm called DeepStack was published...

09/12/2016 · Reduced Space and Faster Convergence in Imperfect-Information Games via Regret-Based Pruning
Counterfactual Regret Minimization (CFR) is the most popular iterative a...
