Search on the Replay Buffer: Bridging Planning and Reinforcement Learning

06/12/2019
by Benjamin Eysenbach, et al.

The history of learning for control has been an exciting back and forth between two broad classes of algorithms: planning and reinforcement learning. Planning algorithms effectively reason over long horizons, but assume access to a local policy and distance metric over collision-free paths. Reinforcement learning excels at learning policies and the relative values of states, but fails to plan over long horizons. Despite the successes of each method in various domains, tasks that require reasoning over long horizons with limited feedback and high-dimensional observations remain exceedingly challenging for both planning and reinforcement learning algorithms. Frustratingly, these sorts of tasks are potentially the most useful, as they are simple to design (a human need only provide an example goal state) and avoid reward shaping, which can bias the agent towards finding a sub-optimal solution. We introduce a general control algorithm that combines the strengths of planning and reinforcement learning to effectively solve these tasks. Our aim is to decompose the task of reaching a distant goal state into a sequence of easier tasks, each of which corresponds to reaching a subgoal. Planning algorithms can automatically find these waypoints, but only if provided with a suitable abstraction of the environment -- namely, a graph consisting of nodes and edges. Our main insight is that this graph can be constructed via reinforcement learning, where a goal-conditioned value function provides edge weights, and nodes are taken to be previously seen observations in a replay buffer. Using graph search over our replay buffer, we can automatically generate this sequence of subgoals, even in image-based environments. Our algorithm, search on the replay buffer (SoRB), enables agents to solve sparse-reward tasks over one hundred steps, and generalizes substantially better than standard RL algorithms.
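To make the graph construction concrete, the sketch below shows one way the buffer-as-graph idea could be implemented, assuming a learned goal-conditioned value function `value_fn(obs, goal)` that approximates the negative number of policy steps needed to travel from `obs` to `goal` (so distance is roughly the negated value under a -1-per-step reward). The names `value_fn`, `replay_buffer`, and `MAX_DIST` are illustrative assumptions, not the paper's reference implementation.

```python
# Minimal sketch of SoRB-style graph search over a replay buffer.
# Assumes `value_fn(obs, goal)` is a learned goal-conditioned value
# function returning roughly the negative number of steps from obs to
# goal; all names here are illustrative, not the authors' code.
import networkx as nx

MAX_DIST = 10.0  # prune edges the low-level policy likely cannot traverse

def build_graph(replay_buffer, value_fn):
    """Nodes are buffer observations; edges carry predicted distances."""
    graph = nx.DiGraph()
    for i, obs_i in enumerate(replay_buffer):
        for j, obs_j in enumerate(replay_buffer):
            if i == j:
                continue
            dist = -value_fn(obs_i, obs_j)  # distance ~ -value (-1/step reward)
            if dist < MAX_DIST:
                graph.add_edge(i, j, weight=dist)
    return graph

def plan_subgoals(graph, replay_buffer, value_fn, start_obs, goal_obs):
    """Shortest-path search through buffer states, from start to goal,
    returning the intermediate observations as a sequence of subgoals."""
    g = graph.copy()
    for i, obs in enumerate(replay_buffer):
        d_out = -value_fn(start_obs, obs)  # start -> buffer state
        if d_out < MAX_DIST:
            g.add_edge("start", i, weight=d_out)
        d_in = -value_fn(obs, goal_obs)    # buffer state -> goal
        if d_in < MAX_DIST:
            g.add_edge(i, "goal", weight=d_in)
    path = nx.shortest_path(g, "start", "goal", weight="weight")
    return [replay_buffer[i] for i in path[1:-1]]  # drop the sentinels
```

At execution time, the goal-conditioned policy would be conditioned on the first waypoint rather than the distant goal, advancing to the next waypoint once the current one is predicted to be within reach; under this scheme, replanning from the agent's current observation can recover from failed traversals.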

