Variance Reduction in Monte Carlo Counterfactual Regret Minimization (VR-MCCFR) for Extensive Form Games using Baselines

09/09/2018
by Martin Schmid, et al.

Learning strategies for imperfect-information games from samples of interaction is a challenging problem. A common method for this setting, Monte Carlo Counterfactual Regret Minimization (MCCFR), can have slow long-term convergence rates due to high variance. In this paper, we introduce a variance reduction technique (VR-MCCFR) that applies to any sampling variant of MCCFR. Using this technique, per-iteration estimated values and updates are reformulated as a function of sampled values and state-action baselines, similar to their use in policy-gradient reinforcement learning. The new formulation allows estimates to be bootstrapped from other estimates within the same episode, propagating the benefits of baselines along the sampled trajectory; the estimates remain unbiased even when bootstrapping from other estimates. Finally, we show that given a perfect baseline, the variance of the value estimates can be reduced to zero. Experimental evaluation shows that VR-MCCFR brings an order-of-magnitude speedup, while the empirical variance decreases by three orders of magnitude. The decreased variance allows CFR+ to be used with sampling for the first time, increasing the speedup to two orders of magnitude.
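To make the estimator concrete, here is a minimal Python sketch (not code from the paper) of a baseline-corrected value update at a single decision point. All names are illustrative assumptions: sigma is the current strategy, q the sampling distribution, baseline the state-action baseline b(h, a), and sample_child a stand-in callback for the recursive descent down the sampled trajectory.

    import random

    def vr_estimate(sigma, q, baseline, sample_child):
        """Baseline-corrected value estimate at one decision point (sketch).

        sigma[a]        -- current strategy probabilities at this state
        q[a]            -- sampling probabilities at this state
        baseline[a]     -- state-action baseline b(h, a)
        sample_child(a) -- returns the (recursively bootstrapped) estimate
                           for the child reached by the sampled action
        """
        actions = list(sigma)
        a = random.choices(actions, weights=[q[x] for x in actions])[0]
        child = sample_child(a)  # recurse down the sampled trajectory

        # Unsampled actions fall back to their baseline; the sampled action
        # adds an importance-weighted correction. The 1/q(a) factor cancels
        # in expectation, so every per-action estimate stays unbiased.
        v_hat = {x: baseline[x] for x in actions}
        v_hat[a] += (child - baseline[a]) / q[a]

        # Bootstrapped state-value estimate, propagated up the episode.
        return sum(sigma[x] * v_hat[x] for x in actions), v_hat

    # Toy check of the zero-variance claim: with a perfect baseline
    # (baseline equals the true action values), the correction term is
    # zero and the returned estimate is exact, no matter which action
    # the sampler picks.
    sigma = {"call": 0.7, "fold": 0.3}
    q = {"call": 0.5, "fold": 0.5}
    true_value = {"call": 1.0, "fold": -1.0}
    v, _ = vr_estimate(sigma, q, true_value, lambda a: true_value[a])
    assert abs(v - 0.4) < 1e-12  # 0.7 * 1.0 + 0.3 * (-1.0), every run

The toy check illustrates why a good baseline helps: the closer baseline[a] is to the true expected child value, the smaller the importance-weighted correction, and with a perfect baseline the sampled estimate collapses to the exact value with zero variance.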


Related research

07/22/2019 · Low-Variance and Zero-Variance Baselines for Extensive-Form Games
Extensive-form games (EFGs) are a common model of multi-agent interactio...

09/04/2023 · Pure Monte Carlo Counterfactual Regret Minimization
Counterfactual Regret Minimization (CFR) and its variants are the best a...

02/19/2020 · Stochastic Regret Minimization in Extensive-Form Games
Monte-Carlo counterfactual regret minimization (MCCFR) is the state-of-t...

07/03/2018 · Solving Atari Games Using Fractals And Entropy
In this paper, we introduce a novel MCTS based approach that is derived ...

12/20/2016 · AIVAT: A New Variance Reduction Technique for Agent Evaluation in Imperfect Information Games
Evaluating agent performance when outcomes are stochastic and agents use...

06/08/2022 · ESCHER: Eschewing Importance Sampling in Games by Computing a History Value Function to Estimate Regret
Recent techniques for approximating Nash equilibria in very large games ...

05/22/2017 · Reducing Reparameterization Gradient Variance
Optimization with noisy gradients has become ubiquitous in statistics an...
