Learning in POMDPs with Monte Carlo Tree Search

06/14/2018
by Sammie Katt, et al.

The POMDP is a powerful framework for reasoning under outcome and information uncertainty, but constructing an accurate POMDP model is difficult. Bayes-Adaptive Partially Observable Markov Decision Processes (BA-POMDPs) extend POMDPs to allow the model to be learned during execution. BA-POMDPs are a Bayesian RL approach that, in principle, allows for an optimal trade-off between exploitation and exploration. Unfortunately, BA-POMDPs are currently impractical to solve for any non-trivial domain. In this paper, we extend the Monte-Carlo Tree Search method POMCP to BA-POMDPs and show that the resulting method, which we call BA-POMCP, is able to tackle problems that previous solution methods have been unable to solve. Additionally, we introduce several techniques that exploit the BA-POMDP structure to improve the efficiency of BA-POMCP, along with proofs of their convergence.
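To make the high-level idea concrete, the sketch below illustrates POMCP-style planning with root sampling in a Bayes-adaptive setting: each simulation first draws one transition model from Dirichlet counts (the Bayes-adaptive belief over dynamics), then runs a UCB tree search over action-observation histories under that sampled model. The toy two-state domain, all function names, and all constants are hypothetical illustrations, not the paper's actual benchmarks or implementation; the full BA-POMCP algorithm also includes further structure-exploiting optimizations not shown here.

```python
import math
import random

# Hypothetical toy domain (not from the paper): 2 hidden states, 2 actions,
# 2 observations. Reward 1 when the action matches the hidden state;
# observations reveal the next state with probability 0.85.
N_STATES, N_ACTIONS, OBS_ACC, GAMMA = 2, 2, 0.85, 0.95

def sample_model(counts, rng):
    """Root sampling: draw one transition model from Dirichlet counts."""
    model = {}
    for sa, alphas in counts.items():
        g = [rng.gammavariate(a, 1.0) for a in alphas]
        z = sum(g)
        model[sa] = [x / z for x in g]
    return model

def step(state, action, model, rng):
    """Simulate one step under the sampled model."""
    next_s = rng.choices(range(N_STATES), weights=model[(state, action)])[0]
    reward = 1.0 if action == state else 0.0
    obs = next_s if rng.random() < OBS_ACC else 1 - next_s
    return next_s, obs, reward

def ucb_action(node, c=1.0):
    """UCB1 action selection over a node's (visit count, value) entries."""
    total = sum(n for n, _ in node.values())
    best, best_val = None, -math.inf
    for a, (n, q) in node.items():
        val = math.inf if n == 0 else q + c * math.sqrt(math.log(total) / n)
        if val > best_val:
            best, best_val = a, val
    return best

def simulate(state, history, model, tree, depth, rng):
    """One MCTS simulation over action-observation histories."""
    if depth == 0:
        return 0.0
    if history not in tree:
        tree[history] = {a: (0, 0.0) for a in range(N_ACTIONS)}
        s, ret, disc = state, 0.0, 1.0  # random rollout below the new leaf
        for _ in range(depth):
            s, _, r = step(s, rng.randrange(N_ACTIONS), model, rng)
            ret += disc * r
            disc *= GAMMA
        return ret
    node = tree[history]
    a = ucb_action(node)
    next_s, obs, r = step(state, a, model, rng)
    ret = r + GAMMA * simulate(next_s, history + (a, obs), model,
                               tree, depth - 1, rng)
    n, q = node[a]
    node[a] = (n + 1, q + (ret - q) / (n + 1))  # incremental mean update
    return ret

def plan(belief_particles, counts, n_sims=400, depth=5, seed=0):
    """Plan from a particle belief over states plus Dirichlet model counts."""
    rng = random.Random(seed)
    tree = {}
    for _ in range(n_sims):
        state = rng.choice(belief_particles)  # sample a hidden state
        model = sample_model(counts, rng)     # root-sample the dynamics
        simulate(state, (), model, tree, depth, rng)
    root = tree[()]
    return max(root, key=lambda a: root[a][1])  # greedy in root value
```

With a uniform Dirichlet prior (all counts 1) and a belief concentrated on state 0, the planner should prefer action 0, since it earns the immediate reward regardless of the uncertain dynamics. Root sampling keeps each simulation cheap: the sampled model is fixed for the whole simulation instead of being re-sampled at every step.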


Related research

- 11/14/2018 · Bayesian Reinforcement Learning in Factored POMDPs
- 02/17/2022 · BADDr: Bayes-Adaptive Deep Dropout RL for POMDPs
- 07/01/2020 · Convex Regularization in Monte-Carlo Tree Search
- 09/19/2023 · Monte-Carlo Tree Search with Uncertainty Propagation via Optimal Transport
- 03/15/2012 · Bayesian Inference in Monte-Carlo Tree Search
- 02/24/2023 · Towards Computationally Efficient Responsibility Attribution in Decentralized Partially Observable MDPs
- 10/20/2011 · A Version of Geiringer-like Theorem for Decision Making in the Environments with Randomness and Incomplete Information
