Variance-Based Rewards for Approximate Bayesian Reinforcement Learning

03/15/2012
by Jonathan Sorg, et al.

The explore-exploit dilemma is one of the central challenges in Reinforcement Learning (RL). Bayesian RL resolves the dilemma by providing the agent with information in the form of a prior distribution over environments; however, full Bayesian planning is intractable. Planning with the mean MDP is a common myopic approximation of Bayesian planning. We derive a novel reward bonus that is a function of the posterior distribution over environments, which, when added to the reward in planning with the mean MDP, results in an agent that explores efficiently and effectively. Although our method behaves similarly to existing methods when given an uninformative or unstructured prior, unlike existing methods, it can exploit structured priors. We prove that our method achieves polynomial sample complexity and empirically demonstrate its advantages in a structured exploration task.
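To make the idea concrete, here is a minimal sketch of planning in the mean MDP with a posterior-variance exploration bonus. It assumes a Dirichlet posterior over transition probabilities (maintained as pseudo-counts) and known mean rewards; the bonus form (a scaled root of the summed marginal posterior variance) and the parameter `beta` are illustrative choices, not the paper's exact derivation:

```python
import numpy as np

def variance_bonus_value_iteration(counts, rewards, gamma=0.95, beta=1.0,
                                   n_iters=200):
    """Plan in the mean MDP with a posterior-variance exploration bonus.

    counts:  (S, A, S) Dirichlet pseudo-counts for the transition posterior.
    rewards: (S, A) known mean rewards.
    beta:    bonus scale (hypothetical tuning parameter).
    Returns the bonus-augmented Q-values and state values.
    """
    S, A, _ = counts.shape
    totals = counts.sum(axis=2, keepdims=True)       # (S, A, 1)
    mean_p = counts / totals                         # mean-MDP transitions
    # Marginal Dirichlet variance of each transition probability:
    # Var[p_j] = mean_j * (1 - mean_j) / (total + 1)
    var_p = mean_p * (1.0 - mean_p) / (totals + 1.0)
    # Variance-based bonus: shrinks as (s, a) is visited more often.
    bonus = beta * np.sqrt(var_p.sum(axis=2))        # (S, A)

    V = np.zeros(S)
    for _ in range(n_iters):
        Q = rewards + bonus + gamma * (mean_p @ V)   # (S, A)
        V = Q.max(axis=1)
    return Q, V
```

Because the bonus decays with the visit counts, states whose transition posterior is still uncertain look temporarily more rewarding, which drives directed exploration without full Bayesian planning.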
