Regret Bounds for Information-Directed Reinforcement Learning

06/09/2022
by Botao Hao, et al.

Information-directed sampling (IDS) has shown promise as a data-efficient algorithm for reinforcement learning (RL), but the theoretical understanding of IDS for Markov decision processes (MDPs) remains limited. We develop novel information-theoretic tools to bound the information ratio and the cumulative information gain about the learning target. Our results highlight the importance of choosing the learning target, which lets practitioners trade off computation against regret bounds. As a consequence, we derive prior-free Bayesian regret bounds for vanilla-IDS, which learns the whole environment, under tabular finite-horizon MDPs. In addition, we propose a computationally efficient regularized-IDS that maximizes an additive form rather than the ratio form, and show that it enjoys the same regret bound as vanilla-IDS. With the aid of rate-distortion theory, we further improve the regret bound by learning a surrogate, less informative environment. Finally, we extend our analysis to linear MDPs and prove similar regret bounds for Thompson sampling as a by-product.
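To make the ratio-form vs. additive-form distinction concrete, here is a minimal sketch of the two action-selection rules the abstract contrasts. It assumes the agent already has per-action estimates of expected regret and of information gain about its learning target (how these are computed is the hard part and is not shown); the function names and the restriction to deterministic action choices are illustrative simplifications, not the paper's actual algorithm.

```python
import numpy as np

def vanilla_ids(expected_regret, info_gain):
    """Ratio form: pick the action minimizing the information ratio,
    (squared expected regret) / (information gain about the target).
    A small floor avoids division by zero for uninformative actions."""
    ratio = expected_regret**2 / np.maximum(info_gain, 1e-12)
    return int(np.argmin(ratio))

def regularized_ids(expected_regret, info_gain, lam=1.0):
    """Additive form: minimize expected regret minus lam * information
    gain. Optimizing a sum instead of a ratio is what makes the
    regularized variant computationally cheaper to work with."""
    return int(np.argmin(expected_regret - lam * info_gain))
```

For example, with `expected_regret = [0.5, 0.1, 0.3]` and `info_gain = [0.2, 0.01, 0.3]`, the ratio rule rejects the low-regret but uninformative middle action and both rules select the last action, which buys the most information per unit of regret.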

Related research

- Information-Theoretic Confidence Bounds for Reinforcement Learning (11/21/2019). We integrate information-theoretic concepts into the design and analysis...
- STEERING: Stein Information Directed Exploration for Model-Based Reinforcement Learning (01/28/2023). Directed Exploration is a crucial challenge in reinforcement learning (R...
- Thompson Sampling with a Mixture Prior (06/10/2021). We study Thompson sampling (TS) in online decision-making problems where...
- Mirror Descent and the Information Ratio (09/25/2020). We establish a connection between the stability of mirror descent and th...
- Computationally Efficient Horizon-Free Reinforcement Learning for Linear Mixture MDPs (05/23/2022). Recent studies have shown that episodic reinforcement learning (RL) is n...
- Deciding What to Model: Value-Equivalent Sampling for Reinforcement Learning (06/04/2022). The quintessential model-based reinforcement-learning agent iteratively ...
- Improved Regret Bound and Experience Replay in Regularized Policy Iteration (02/25/2021). In this work, we study algorithms for learning in infinite-horizon undis...
