Simple Regret Optimization in Online Planning for Markov Decision Processes

06/15/2012
by Zohar Feldman, et al.

We consider online planning in Markov decision processes (MDPs). In online planning, the agent focuses on its current state only, deliberates about the set of possible policies from that state onwards and, when interrupted, uses the outcome of that exploratory deliberation to choose which action to perform next. The performance of algorithms for online planning is assessed in terms of simple regret, the agent's expected performance loss when the chosen action, rather than an optimal one, is followed. To date, state-of-the-art algorithms for online planning in general MDPs are either best-effort, or guarantee only a polynomial rate of simple-regret reduction over time. Here we introduce a new Monte-Carlo tree search algorithm, BRUE, that guarantees an exponential rate of reduction of both simple regret and probability of error. The algorithm is based on a simple yet non-standard state-space sampling scheme, MCTS2e, in which different parts of each sample are dedicated to different exploratory objectives. Our empirical evaluation shows that BRUE not only provides superior performance guarantees, but is also very effective in practice, comparing favorably to the state of the art. We then extend BRUE with a variant of "learning by forgetting." The resulting family of algorithms, BRUE(alpha), generalizes BRUE, improves the exponential factor in the upper bound on its reduction rate, and exhibits even more attractive empirical performance.
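To make the evaluation criterion concrete: simple regret is the gap between the value of an optimal action at the current state and the value of the action the planner recommends when interrupted. The sketch below illustrates only this definition, not the BRUE algorithm itself; the action names and their values are hypothetical.

```python
def simple_regret(q_values, chosen_action):
    # Simple regret: expected loss from committing to `chosen_action`
    # instead of an optimal action at the agent's current state.
    return max(q_values.values()) - q_values[chosen_action]

# Hypothetical true action values at the current state.
q = {"left": 0.3, "right": 0.7, "wait": 0.5}

# If deliberation is interrupted and the planner recommends "wait",
# its simple regret is the gap 0.7 - 0.5; recommending the optimal
# action "right" incurs zero regret.
print(simple_regret(q, "wait"))
print(simple_regret(q, "right"))
```

An algorithm with an exponential-rate guarantee drives this quantity (and the probability of recommending a suboptimal action) to zero exponentially fast in the number of samples, rather than polynomially.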


