A Unified Perspective on Value Backup and Exploration in Monte-Carlo Tree Search

02/11/2022
by Tuan Dam et al.

Monte-Carlo Tree Search (MCTS) is a class of methods for solving complex decision-making problems through the synergy of Monte-Carlo planning and Reinforcement Learning (RL). The highly combinatorial nature of the problems commonly addressed by MCTS requires exploration strategies that navigate the planning tree efficiently and value backup methods that converge quickly. These issues are particularly evident in recent advances that combine MCTS with deep neural networks for function approximation. In this work, we propose two methods for improving the convergence rate and exploration, based on a newly introduced backup operator and on entropy regularization. We provide strong theoretical guarantees bounding the convergence rate, approximation error, and regret of our methods. Moreover, we introduce a mathematical framework, based on the α-divergence, for backup and exploration in MCTS. We show that this formulation unifies different approaches, including our newly introduced ones, under the same mathematical framework, so that different methods are obtained by simply changing the value of α. In practice, our unified perspective offers a flexible way to balance exploration and exploitation by tuning the single parameter α according to the problem at hand. We validate our methods through a rigorous empirical study ranging from basic toy problems to complex Atari games, covering both MDP and POMDP settings.
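To make the idea of a single tunable backup parameter concrete, here is a minimal Python sketch of one member of the family of operators such a framework interpolates over: a visit-weighted power mean of child values. The function name and parameters are illustrative assumptions, not the paper's actual operator or API; the exponent p plays a role analogous to the tunable parameter, recovering the standard average backup at p = 1 and approaching the max backup as p grows.

import numpy as np

def power_mean_backup(values, visit_counts, p):
    # Visit-weighted power mean of non-negative child value estimates.
    # p = 1 gives the standard average backup (as in UCT);
    # p -> infinity approaches the max backup.
    v = np.asarray(values, dtype=float)
    w = np.asarray(visit_counts, dtype=float)
    w = w / w.sum()
    return float((w @ v**p) ** (1.0 / p))

# Example: three children with value estimates and visit counts.
vals, counts = [0.2, 0.5, 0.9], [10, 5, 1]
print(power_mean_backup(vals, counts, 1.0))    # ~0.34, the weighted average
print(power_mean_backup(vals, counts, 100.0))  # ~0.88, close to the max (0.9)

A single exponent thus moves the backup smoothly between averaging and maximizing, which is the kind of exploration-exploitation dial the abstract describes.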

Related research

Convex Regularization in Monte-Carlo Tree Search (07/01/2020)
Monte-Carlo planning and Reinforcement Learning (RL) are essential to se...

Active Tree Search in Large POMDPs (03/25/2021)
Model-based planning and prospection are widely studied in both cognitiv...

Monte Carlo Tree Search: A Review of Recent Modifications and Applications (03/08/2021)
Monte Carlo Tree Search (MCTS) is a powerful approach to designing game-...

Provable and Practical: Efficient Exploration in Reinforcement Learning via Langevin Monte Carlo (05/29/2023)
We present a scalable and effective exploration strategy based on Thomps...

How to Combine Tree-Search Methods in Reinforcement Learning (09/06/2018)
Finite-horizon lookahead policies are abundantly used in Reinforcement L...

Static and Dynamic Values of Computation in MCTS (02/11/2020)
Monte-Carlo Tree Search (MCTS) is one of the most-widely used methods fo...

BADDr: Bayes-Adaptive Deep Dropout RL for POMDPs (02/17/2022)
While reinforcement learning (RL) has made great advances in scalability...
