Making Efficient Use of Demonstrations to Solve Hard Exploration Problems

09/03/2019
by Tom Le Paine, et al.

This paper introduces R2D3, an agent that makes efficient use of demonstrations to solve hard exploration problems in partially observable environments with highly variable initial conditions. We also introduce a suite of eight tasks that combine these three properties (hard exploration, partial observability, and highly variable initial conditions), and show that R2D3 can solve several of the tasks on which other state-of-the-art methods, both with and without demonstrations, fail to see even a single successful trajectory after tens of billions of steps of exploration.
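
For readers skimming ahead of the full text: the mechanism behind R2D3's demonstration efficiency is a mixed replay scheme, in which each learner batch draws sequences from a demonstration buffer with a small fixed probability (the "demo ratio") and from the agent's own experience otherwise. The sketch below illustrates only that sampling rule; the `ReplayBuffer` class, its uniform `sample()` method, and the default ratio are illustrative stand-ins for the paper's prioritized sequence replay setup, not the authors' code.

```python
import random

class ReplayBuffer:
    """Hypothetical stand-in for a prioritized sequence replay buffer."""
    def __init__(self, sequences):
        self.sequences = list(sequences)

    def sample(self):
        # R2D3 samples with learned priorities; uniform is a simplification.
        return random.choice(self.sequences)

def sample_batch(demo_buffer, agent_buffer, batch_size=64, demo_ratio=1 / 256):
    """Mix one learner batch from demonstration and agent experience.

    Each batch slot is filled from the demo buffer with probability
    `demo_ratio` and from the agent's own experience otherwise.
    """
    return [
        demo_buffer.sample() if random.random() < demo_ratio
        else agent_buffer.sample()
        for _ in range(batch_size)
    ]

# Usage: with a tiny demo ratio, most of the learning signal still comes
# from the agent's own exploration, while demonstrations occasionally
# steer it toward rarely seen rewarding states.
demos = ReplayBuffer(["demo_seq_%d" % i for i in range(10)])
agent = ReplayBuffer(["agent_seq_%d" % i for i in range(1000)])
batch = sample_batch(demos, agent, batch_size=64)
```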


Related research

Guided Exploration with Proximal Policy Optimization using a Single Demonstration (07/07/2020)
Solving sparse reward tasks through exploration is one of the major chal...

BYOL-Explore: Exploration by Bootstrapped Prediction (06/16/2022)
We present BYOL-Explore, a conceptually simple yet general approach for ...

Learning Memory-Dependent Continuous Control from Demonstrations (02/18/2021)
Efficient exploration has presented a long-standing challenge in reinfor...

Improving Learning from Demonstrations by Learning from Experience (11/16/2021)
How to make imitation learning more general when demonstrations are rela...

Go-Explore: a New Approach for Hard-Exploration Problems (01/30/2019)
A grand challenge in reinforcement learning is intelligent exploration, ...

Align-RUDDER: Learning From Few Demonstrations by Reward Redistribution (09/29/2020)
Reinforcement Learning algorithms require a large number of samples to s...

Imitating Human Search Strategies for Assembly (09/13/2018)
We present a Learning from Demonstration method for teaching robots to p...