Context-Dependent Upper-Confidence Bounds for Directed Exploration

11/15/2018
by Raksha Kumaraswamy, et al.

Directed exploration strategies for reinforcement learning are critical for learning an optimal policy in a minimal number of interactions with the environment. Many algorithms use optimism to direct exploration, either through visitation estimates or upper confidence bounds, as opposed to data-inefficient strategies like ϵ-greedy that rely on random, undirected exploration. Most data-efficient exploration methods require significant computation, typically relying on a learned model to guide exploration. Least-squares methods have the potential to provide some of the data-efficiency benefits of model-based approaches, because they summarize past interactions, at a computational cost closer to that of model-free approaches. In this work, we provide a novel, computationally efficient, incremental exploration strategy that leverages this property of least-squares temporal difference learning (LSTD). We derive upper confidence bounds on the action-values learned by LSTD, with context-dependent (or state-dependent) noise variance. Such context-dependent noise focuses exploration on a subset of variable states and allows for reduced exploration in other states. We empirically demonstrate that our algorithm can converge more quickly than other incremental exploration strategies that use confidence estimates on action-values.
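To make the optimism computation concrete, below is a minimal sketch of UCB-style action selection on top of LSTD with linear function approximation. This is an illustration under stated assumptions, not the paper's algorithm: the class name and the parameters reg and beta are invented for the example, and the generic elliptical bonus sqrt(x^T C^{-1} x) stands in for the paper's context-dependent confidence bounds, which additionally model state-dependent noise variance.

```python
import numpy as np

class UCBLSTDSketch:
    """Illustrative UCB + LSTD agent (not the paper's exact bound).

    Maintains the LSTD system A theta = b over joint state-action
    features, plus a Gram matrix C used to compute a generic
    elliptical confidence width for each action.
    """

    def __init__(self, num_features, num_actions, gamma=0.99, reg=1.0, beta=1.0):
        d = num_features * num_actions
        self.num_features, self.num_actions = num_features, num_actions
        self.gamma, self.beta = gamma, beta
        self.A = reg * np.eye(d)   # regularized LSTD matrix
        self.b = np.zeros(d)       # LSTD vector
        self.C = reg * np.eye(d)   # Gram matrix of visited features

    def phi(self, state_features, action):
        """Block encoding: state features placed in the chosen action's slot."""
        x = np.zeros(self.A.shape[0])
        start = action * self.num_features
        x[start:start + self.num_features] = state_features
        return x

    def ucb_values(self, state_features):
        """Estimated action-values plus an exploration bonus per action."""
        theta = np.linalg.solve(self.A, self.b)  # LSTD solution
        values = np.empty(self.num_actions)
        for a in range(self.num_actions):
            x = self.phi(state_features, a)
            # Generic elliptical confidence width; shrinks along
            # feature directions that have been visited often.
            width = np.sqrt(x @ np.linalg.solve(self.C, x))
            values[a] = theta @ x + self.beta * width
        return values

    def act(self, state_features):
        return int(np.argmax(self.ucb_values(state_features)))

    def update(self, s_feat, a, r, s_next_feat, a_next, done):
        """SARSA-style LSTD accumulation for one transition."""
        x = self.phi(s_feat, a)
        x_next = np.zeros_like(x) if done else self.phi(s_next_feat, a_next)
        self.A += np.outer(x, x - self.gamma * x_next)
        self.b += r * x
        self.C += np.outer(x, x)
```

A typical interaction loop chooses a_next = agent.act(s_next_feat) and then calls agent.update(...), so the bonus decays exactly where data accumulates. A practical incremental implementation would maintain the inverses of A and C with Sherman-Morrison updates rather than re-solving the linear systems at every step.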


Related research

06/05/2017 · UCB Exploration via Q-Ensembles
We show how an ensemble of Q^*-functions can be leveraged for more effec...

12/18/2018 · Information-Directed Exploration for Deep Reinforcement Learning
Efficient exploration remains a major challenge for reinforcement learni...

04/11/2018 · DORA The Explorer: Directed Outreaching Reinforcement Action-Selection
Exploration is a fundamental aspect of Reinforcement Learning, typically...

10/29/2018 · Model-Based Active Exploration
Efficient exploration is an unsolved problem in Reinforcement Learning. ...

10/01/2022 · Deep Intrinsically Motivated Exploration in Continuous Control
In continuous control, exploration is often performed through undirected...

11/22/2021 · Optimistic Temporal Difference Learning for 2048
Temporal difference (TD) learning and its variants, such as multistage T...

04/21/2023 · On the Importance of Exploration for Real Life Learned Algorithms
The quality of data driven learning algorithms scales significantly with...
