Integrating Behavior Cloning and Reinforcement Learning for Improved Performance in Sparse Reward Environments

10/09/2019
by Vinicius G. Goecks, et al.

This paper investigates how to efficiently transition and update policies, trained initially with demonstrations, using off-policy actor-critic reinforcement learning. It is well known that techniques based on Learning from Demonstrations, such as behavior cloning, can produce proficient policies from limited data. However, it is currently unclear how to efficiently update such a policy using reinforcement learning, as these approaches inherently optimize different objective functions. Previous works have combined behavior cloning losses with reinforcement learning losses to enable this update; however, the components of these loss functions are often set anecdotally, and their individual contributions are not well understood. In this work we propose the Cycle-of-Learning (CoL) framework, which uses an actor-critic architecture with a loss function that combines behavior cloning and 1-step Q-learning losses, together with an off-policy pre-training step on human demonstrations. This enables a transition from behavior cloning to reinforcement learning without performance degradation and improves reinforcement learning in terms of both overall performance and training time. Additionally, we carefully study the composition of these combined losses and their impact on overall policy learning. We show that our approach outperforms state-of-the-art techniques for combining behavior cloning and reinforcement learning in both dense and sparse reward scenarios. Our results also suggest that directly including the behavior cloning loss on demonstration data helps to ensure stable learning and ground future policy updates.
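The combined objective described above can be sketched as a weighted sum of a behavior cloning term, a 1-step Q-learning (TD) term, and an actor term. The sketch below is illustrative only: the function names (`actor`, `critic`, `col_loss`) and the specific loss weights are assumptions, not the paper's exact implementation, and a real version would use a deep learning framework with gradient-based updates rather than plain NumPy.

```python
import numpy as np

def col_loss(actor, critic, batch, gamma=0.99,
             lambda_bc=1.0, lambda_a=1.0, lambda_q=1.0):
    """Cycle-of-Learning-style combined loss (illustrative sketch).

    batch: dict of arrays 'state', 'action', 'reward', 'next_state',
           'done', drawn from a buffer mixing demonstration and
           agent transitions.
    actor(s)     -> predicted action for states s
    critic(s, a) -> scalar Q-value estimate per (state, action) pair
    """
    s, a, r, s2, done = (batch[k] for k in
                         ("state", "action", "reward", "next_state", "done"))

    # Behavior cloning term: regress the actor toward the actions
    # observed in the demonstration data.
    a_pred = actor(s)
    l_bc = np.mean((a_pred - a) ** 2)

    # 1-step Q-learning term: temporal-difference error of the critic
    # against a bootstrapped one-step target.
    q = critic(s, a)
    q_target = r + gamma * (1.0 - done) * critic(s2, actor(s2))
    l_q = np.mean((q - q_target) ** 2)

    # Actor term: push the actor toward actions the critic values highly
    # (negated because we minimize the total loss).
    l_a = -np.mean(critic(s, actor(s)))

    return lambda_bc * l_bc + lambda_q * l_q + lambda_a * l_a
```

Pre-training would minimize this loss on demonstration data alone before switching to the mixed demonstration/agent buffer, which is what lets the policy transition from behavior cloning to reinforcement learning without a performance drop.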

