Jump-Start Reinforcement Learning

04/05/2022
by Ikechukwu Uchendu, et al.

Reinforcement learning (RL) provides a theoretical framework for continuously improving an agent's behavior via trial and error. However, efficiently learning policies from scratch can be very difficult, particularly for tasks with exploration challenges. In such settings, it might be desirable to initialize RL with an existing policy, offline data, or demonstrations. However, naively performing such initialization in RL often works poorly, especially for value-based methods. In this paper, we present a meta algorithm that can use offline data, demonstrations, or a pre-existing policy to initialize an RL policy, and is compatible with any RL approach. In particular, we propose Jump-Start Reinforcement Learning (JSRL), an algorithm that employs two policies to solve tasks: a guide-policy, and an exploration-policy. By using the guide-policy to form a curriculum of starting states for the exploration-policy, we are able to efficiently improve performance on a set of simulated robotic tasks. We show via experiments that JSRL is able to significantly outperform existing imitation and reinforcement learning algorithms, particularly in the small-data regime. In addition, we provide an upper bound on the sample complexity of JSRL and show that with the help of a guide-policy, one can improve the sample complexity for non-optimism exploration methods from exponential in horizon to polynomial.
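The core mechanism described above, rolling in with the guide-policy for a number of steps before handing control to the exploration-policy, and gradually shrinking that roll-in as the exploration-policy improves, can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the toy chain task, the function names (`jsrl_rollout`, `reset`, `step`), and the hand-coded policies are all assumptions made for the example.

```python
import random

def jsrl_rollout(reset, step, guide, explore, guide_steps, horizon):
    """One episode: the guide-policy acts for the first `guide_steps`
    steps, then the exploration-policy takes over (the JSRL roll-in)."""
    state = reset()
    total, trajectory = 0.0, []
    for t in range(horizon):
        action = guide(state) if t < guide_steps else explore(state)
        state, reward, done = step(state, action)
        trajectory.append((state, action, reward))
        total += reward
        if done:
            break
    return total, trajectory

# Toy chain task (illustrative): start at 0, goal at 5, reward only at the goal.
GOAL = 5

def reset():
    return 0

def step(state, action):
    state = max(0, state + action)
    return state, float(state == GOAL), state == GOAL

guide = lambda s: +1                        # stands in for a pretrained policy
explore = lambda s: random.choice([-1, 1])  # stands in for an untrained RL policy

# Curriculum of starting states: shrink the guide roll-in toward zero, so the
# exploration-policy starts ever closer to the initial state distribution.
for guide_steps in range(GOAL, -1, -1):
    ret, traj = jsrl_rollout(reset, step, guide, explore, guide_steps, horizon=20)
    # in real JSRL, the transitions in `traj` would be used to update
    # the exploration-policy before tightening the curriculum further
```

The key design point the sketch captures is that the exploration-policy only ever has to learn the "last few steps" of the task at each curriculum stage, which is what yields the improved sample complexity discussed in the abstract.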


Related research

Importance of Empirical Sample Complexity Analysis for Offline Reinforcement Learning (12/31/2021)
We hypothesize that empirically studying the sample complexity of offlin...

A Policy-Guided Imitation Approach for Offline Reinforcement Learning (10/15/2022)
Offline reinforcement learning (RL) methods can generally be categorized...

Policy Certificates: Towards Accountable Reinforcement Learning (11/07/2018)
The performance of a reinforcement learning algorithm can vary drastical...

Learning Self-Correctable Policies and Value Functions from Demonstrations with Negative Sampling (07/12/2019)
Imitation learning, followed by reinforcement learning algorithms, is a ...

Scheduled Policy Optimization for Natural Language Communication with Intelligent Agents (06/16/2018)
We investigate the task of learning to follow natural language instructi...

Observe and Look Further: Achieving Consistent Performance on Atari (05/29/2018)
Despite significant advances in the field of deep Reinforcement Learning...

Weakly-Supervised Reinforcement Learning for Controllable Behavior (04/06/2020)
Reinforcement learning (RL) is a powerful framework for learning to take...
