Problem Dependent Reinforcement Learning Bounds Which Can Identify Bandit Structure in MDPs

11/03/2019
by Andrea Zanette, et al.

In order to make good decisions under uncertainty, an agent must learn from observations. Two of the most common frameworks for this are Contextual Bandits and Markov Decision Processes (MDPs). In this paper, we study whether there exist algorithms for the more general framework (MDPs) that automatically provide the best performance bounds for the specific problem at hand, without user intervention and without modifying the algorithm. In particular, we find that a very minor variant of a recently proposed reinforcement learning algorithm for MDPs already matches the best possible regret bound Õ(√(SAT)) in the dominant term when deployed on a tabular Contextual Bandit problem, despite the agent being agnostic to that setting.
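To make the claim concrete, the sketch below simulates the setting the abstract describes: a tabular contextual bandit (an MDP with horizon 1) played by a generic optimistic, UCB-style tabular agent that does not exploit the bandit structure. This is not the paper's algorithm; it is a minimal illustration under assumed parameters, and all names (S, A, T, true_means) are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tabular contextual bandit: S contexts, A arms, T rounds.
S, A, T = 5, 3, 20000
true_means = rng.uniform(0.2, 0.8, size=(S, A))   # unknown mean rewards

# Generic optimistic (UCB-style) agent, written as it would be for any
# tabular problem; it is agnostic to the contextual-bandit structure.
counts = np.zeros((S, A))
reward_sums = np.zeros((S, A))
regret = 0.0

for t in range(1, T + 1):
    s = rng.integers(S)                            # context drawn i.i.d.
    # Optimistic value: empirical mean plus an exploration bonus.
    means = reward_sums / np.maximum(counts, 1)
    bonus = np.sqrt(2 * np.log(max(t, 2)) / np.maximum(counts[s], 1))
    bonus[counts[s] == 0] = np.inf                 # try each arm at least once
    a = int(np.argmax(means[s] + bonus))
    r = float(rng.random() < true_means[s, a])     # Bernoulli reward
    counts[s, a] += 1
    reward_sums[s, a] += r
    regret += true_means[s].max() - true_means[s, a]

print(f"cumulative regret after T={T}: {regret:.1f}")
print(f"sqrt(S*A*T) for scale:        {np.sqrt(S * A * T):.1f}")

If the abstract's claim holds for an algorithm of this flavor, the cumulative regret of such an agnostic optimistic agent should grow on the order of √(SAT), matching the dominant term of the stated bound; the printed √(SAT) value is shown only for scale.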


