When Should We Prefer Offline Reinforcement Learning Over Behavioral Cloning?

04/12/2022
by Aviral Kumar, et al.

Offline reinforcement learning (RL) algorithms can acquire effective policies by utilizing previously collected experience, without any online interaction. It is widely understood that offline RL is able to extract good policies even from highly suboptimal data, a scenario where imitation learning finds suboptimal solutions that do not improve over the demonstrator that generated the dataset. However, another common use case for practitioners is to learn from data that resembles demonstrations. In this case, one can choose to apply offline RL, but can also use behavioral cloning (BC) algorithms, which mimic a subset of the dataset via supervised learning. Therefore, it seems natural to ask: when can an offline RL method outperform BC with an equal amount of expert data, even when BC is a natural choice? To answer this question, we characterize the properties of environments that allow offline RL methods to perform better than BC methods, even when only provided with expert data. Additionally, we show that policies trained on sufficiently noisy suboptimal data can attain better performance than even BC algorithms with expert data, especially on long-horizon problems. We validate our theoretical results via extensive experiments on both diagnostic and high-dimensional domains including robotic manipulation, maze navigation, and Atari games, with a variety of data distributions. We observe that, under specific but common conditions such as sparse rewards or noisy data sources, modern offline RL methods can significantly outperform BC.
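
The abstract contrasts BC, which imitates the logged actions via supervised learning, with offline RL, which additionally exploits the logged rewards. The minimal tabular sketch below illustrates that contrast; the 5-state chain MDP, the noisy demonstrator, and all names are hypothetical illustrations chosen for this example, not the paper's setup or algorithms (the paper's experiments use modern deep offline RL methods rather than a tabular toy).

```python
# Illustrative sketch only: tabular behavioral cloning vs. offline Q-learning
# on a logged dataset of (state, action, reward, next_state) transitions.
# The chain MDP and demonstrator below are hypothetical, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 2, 0.9

# Hypothetical logged data: a noisy demonstrator on a 5-state chain with a
# sparse reward for reaching the last state (action 1 = move right).
# At state 2 the demonstrator picks the wrong action most of the time.
dataset = []
for _ in range(200):
    s = 0
    for _ in range(10):
        p_right = 0.4 if s == 2 else 0.8
        a = 1 if rng.random() < p_right else 0
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0   # sparse reward
        dataset.append((s, a, r, s_next))
        s = s_next

# Behavioral cloning: imitate the empirical action frequencies per state,
# so the greedy BC policy copies the demonstrator's majority action,
# including the noisy choice at state 2.
counts = np.zeros((n_states, n_actions))
for s, a, _, _ in dataset:
    counts[s, a] += 1
bc_policy = counts.argmax(axis=1)

# Offline Q-learning: back up the logged reward through the dataset, so the
# greedy policy is anchored to the reward signal rather than to the
# demonstrator's action frequencies.
Q = np.zeros((n_states, n_actions))
for _ in range(50):
    for s, a, r, s_next in dataset:
        target = r + gamma * Q[s_next].max()
        Q[s, a] += 0.1 * (target - Q[s, a])
rl_policy = Q.argmax(axis=1)

print("BC policy:", bc_policy)
print("RL policy:", rl_policy)
```

The sketch is built so that BC reproduces the demonstrator, noise included, while the value-based learner uses rewards to recover a better action where the data is noisy; this is the same structural difference that underlies the noisy-data, sparse-reward, long-horizon conditions discussed above.
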
