Expert Selection in High-Dimensional Markov Decision Processes

by Vicenc Rubies Royo, et al.

In this work, we present a multi-armed bandit framework for online expert selection in Markov decision processes and demonstrate its use in high-dimensional settings. Our method takes a set of candidate expert policies and switches between them to rapidly identify the best-performing expert, using a variant of the classical upper confidence bound (UCB) algorithm to ensure low regret in the overall performance of the system. This is useful in applications where several expert policies may be available and one must be selected at run time for the underlying environment.
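The abstract does not specify the paper's exact UCB variant, so the following is only a minimal sketch of the general idea using classical UCB1: treat each candidate expert policy as a bandit arm, run the selected expert for an episode, and update that expert's empirical mean return. All names (`ucb1_select`, `run_expert_selection`, `env_step`) are hypothetical, and episode returns are assumed to be bounded in [0, 1].

```python
import math
import random

def ucb1_select(n_pulls, mean_rewards, t, c=2.0):
    """Return the index of the expert maximizing the UCB1 score.

    Hypothetical helper; assumes returns are bounded in [0, 1].
    """
    # Try every expert at least once before trusting the scores.
    for i, n in enumerate(n_pulls):
        if n == 0:
            return i
    # Empirical mean plus an exploration bonus that shrinks with pulls.
    scores = [m + math.sqrt(c * math.log(t) / n)
              for m, n in zip(mean_rewards, n_pulls)]
    return scores.index(max(scores))

def run_expert_selection(experts, env_step, horizon):
    """Switch between candidate expert policies online for `horizon` episodes.

    `experts` is a list of policies; `env_step(policy)` runs one episode
    with that policy and returns its return in [0, 1] (an assumed interface).
    """
    k = len(experts)
    n_pulls = [0] * k
    mean_rewards = [0.0] * k
    for t in range(1, horizon + 1):
        i = ucb1_select(n_pulls, mean_rewards, t)
        r = env_step(experts[i])
        n_pulls[i] += 1
        # Incremental update of the running mean return for expert i.
        mean_rewards[i] += (r - mean_rewards[i]) / n_pulls[i]
    return n_pulls, mean_rewards

if __name__ == "__main__":
    # Toy illustration: each "expert" is just a Bernoulli success
    # probability standing in for an expected episode return.
    random.seed(0)
    experts = [0.2, 0.5, 0.8]
    step = lambda p: 1.0 if random.random() < p else 0.0
    pulls, means = run_expert_selection(experts, step, horizon=2000)
    print(pulls)  # the best expert should receive most of the pulls
```

In the toy run above, the exploration bonus guarantees every expert is sampled, while the shrinking confidence terms concentrate later episodes on the expert with the highest empirical return, which is what keeps cumulative regret low.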

