Principled Exploration via Optimistic Bootstrapping and Backward Induction

05/13/2021
by Chenjia Bai, et al.

One principled approach to provably efficient exploration is to incorporate an upper confidence bound (UCB) into the value function as a bonus. However, UCB is tailored to linear and tabular settings and is incompatible with Deep Reinforcement Learning (DRL). In this paper, we propose a principled exploration method for DRL through Optimistic Bootstrapping and Backward Induction (OB2I). OB2I constructs a general-purpose UCB-bonus through non-parametric bootstrap in DRL. The UCB-bonus estimates the epistemic uncertainty of state-action pairs for optimistic exploration. We build theoretical connections between the proposed UCB-bonus and the LSVI-UCB bonus in the linear setting. We propagate future uncertainty in a time-consistent manner through an episodic backward update, which exploits the theoretical advantage and empirically improves sample efficiency. Our experiments in the MNIST maze and Atari suite suggest that OB2I outperforms several state-of-the-art exploration approaches.
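The two ingredients described above, an ensemble-based UCB-bonus and an episodic backward update, can be illustrated with a small tabular sketch. Everything below is an assumption made for illustration (the per-action ensemble standard deviation as the bonus, a shared optimistic target for all heads, and the hyperparameters K, GAMMA, BETA, LR); it is not the authors' implementation, which operates on deep Q-networks with a non-parametric bootstrap.

import numpy as np

# Illustrative sketch, not the OB2I implementation: a bootstrapped ensemble of
# K tabular Q-functions gives an epistemic UCB-style bonus (the ensemble's
# standard deviation), and an episode is replayed backward so that the
# optimistic target written at step t+1 is already reflected in the table
# when step t is updated.

GAMMA = 0.99   # discount factor
BETA = 1.0     # bonus scale (assumed hyperparameter)
LR = 0.5       # learning rate (assumed hyperparameter)
K = 10         # number of bootstrap heads (assumed hyperparameter)

def ucb_bonus(q_ensemble, s):
    """Epistemic bonus at state s: per-action std over the K heads."""
    return q_ensemble[:, s, :].std(axis=0)

def episodic_backward_update(q_ensemble, episode):
    """Update each head on one episode, sweeping transitions from last to first.

    q_ensemble: array of shape (K, num_states, num_actions)
    episode:    list of (s, a, r, s_next, done) tuples in time order
    """
    for s, a, r, s_next, done in reversed(episode):
        if done:
            target = r
        else:
            optimistic = (q_ensemble[:, s_next, :].mean(axis=0)
                          + BETA * ucb_bonus(q_ensemble, s_next))
            target = r + GAMMA * optimistic.max()
        # Each head moves toward the shared optimistic target (a simplification;
        # a bootstrapped ensemble would typically mask data per head).
        q_ensemble[:, s, a] += LR * (target - q_ensemble[:, s, a])

# Tiny usage example: 3 states, 2 actions, one short episode.
rng = np.random.default_rng(0)
q = rng.normal(scale=0.1, size=(K, 3, 2))
episode = [(0, 1, 0.0, 1, False), (1, 0, 0.0, 2, False), (2, 1, 1.0, 2, True)]
episodic_backward_update(q, episode)
print(q.mean(axis=0))  # ensemble-mean Q after one backward sweep

Because the sweep runs from the last transition to the first, the optimistic value written at step t+1 is already visible when the target for step t is formed, which is how future uncertainty can be propagated backward through an episode in a time-consistent manner.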


