Model-Based Active Exploration

10/29/2018
by Pranav Shyam, et al.

Efficient exploration is an unsolved problem in Reinforcement Learning. We introduce Model-Based Active eXploration (MAX), an algorithm that actively explores the environment. It minimizes data required to comprehensively model the environment by planning to observe novel events, instead of merely reacting to novelty encountered by chance. Non-stationarity induced by traditional exploration bonus techniques is avoided by constructing fresh exploration policies only at time of action. In semi-random toy environments where directed exploration is critical to make progress, our algorithm is at least an order of magnitude more efficient than strong baselines.
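The approach can be illustrated with a short sketch. The ensemble of dynamics models, the disagreement-based utility, and the random-shooting planner below are simplifying assumptions made for illustration, not the paper's exact components (MAX derives its utility from an information-theoretic novelty measure computed under the learned model); the environment is assumed to follow the Gymnasium step/reset convention.

```python
# Illustrative sketch of model-based active exploration in the spirit of MAX.
# Assumptions (not the authors' implementation): an ensemble of learned
# dynamics models, ensemble disagreement as the novelty utility, and
# random-shooting planning to build a fresh exploration policy at the
# time of action.

import numpy as np


class EnsembleDynamics:
    """Ensemble of dynamics models; disagreement among members signals novelty."""

    def __init__(self, models):
        # Each model maps (state, action) -> predicted next state.
        self.models = models

    def predict_all(self, state, action):
        return np.stack([m(state, action) for m in self.models])

    def disagreement(self, state, action):
        # Total variance across ensemble predictions, a cheap stand-in for
        # the information-theoretic utility used in the paper.
        return self.predict_all(state, action).var(axis=0).sum()


def plan_exploration_action(ensemble, state, action_space,
                            horizon=10, n_candidates=64):
    """Plan inside the model: pick the action sequence with the most
    predicted novelty and return its first action."""
    best_score, best_action = -np.inf, None
    for _ in range(n_candidates):
        actions = [action_space.sample() for _ in range(horizon)]
        score, sim_state = 0.0, state
        for a in actions:
            score += ensemble.disagreement(sim_state, a)
            # Roll the simulated state forward with the ensemble mean.
            sim_state = ensemble.predict_all(sim_state, a).mean(axis=0)
        if score > best_score:
            best_score, best_action = score, actions[0]
    return best_action


def explore(env, ensemble, fit_ensemble, n_steps=1000):
    """Outer loop: act to observe novel transitions, then refit the ensemble,
    so the exploration policy is rebuilt from fresh models at every step."""
    transitions = []
    state, _ = env.reset()
    for _ in range(n_steps):
        action = plan_exploration_action(ensemble, state, env.action_space)
        next_state, _, terminated, truncated, _ = env.step(action)
        transitions.append((state, action, next_state))
        fit_ensemble(ensemble, transitions)  # refit on all data collected so far
        state = env.reset()[0] if (terminated or truncated) else next_state
    return transitions
```

Because the exploration plan is recomputed from the current models at every step, there is no fixed bonus-augmented reward that drifts as the agent learns, which is how the construct-at-time-of-action design sidesteps the non-stationarity of traditional exploration bonuses.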

Related Research

10/24/2020
Planning with Exploration: Addressing Dynamics Bottleneck in Model-based Reinforcement Learning
Model-based reinforcement learning is a framework in which an agent lear...

09/28/2020
Novelty Search in representational space for sample efficient exploration
We present a new approach for efficient exploration which leverages a lo...

02/07/2019
Deeper & Sparser Exploration
We address the problem of efficient exploration by proposing a new meta ...

12/10/2018
Improving Model-Based Control and Active Exploration with Reconstruction Uncertainty Optimization
Model based predictions of future trajectories of a dynamical system oft...

09/05/2017
Active Exploration for Learning Symbolic Representations
We introduce an online active exploration algorithm for data-efficiently...

07/02/2023
Active Sensing with Predictive Coding and Uncertainty Minimization
We present an end-to-end procedure for embodied exploration based on two...

11/15/2018
Context-Dependent Upper-Confidence Bounds for Directed Exploration
Directed exploration strategies for reinforcement learning are critical ...
