
A Closer Look at Advantage-Filtered Behavioral Cloning in High-Noise Datasets

10/10/2021
by Jake Grigsby, et al.
University of Virginia

Recent Offline Reinforcement Learning methods have succeeded in learning high-performance policies from fixed datasets of experience. A particularly effective approach learns to first identify and then mimic optimal decision-making strategies. Our work evaluates this method's ability to scale to vast datasets consisting almost entirely of sub-optimal noise. A thorough investigation on a custom benchmark helps identify several key challenges involved in learning from high-noise datasets. We re-purpose prioritized experience sampling to locate expert-level demonstrations among millions of low-performance samples. This modification enables offline agents to learn state-of-the-art policies in benchmark tasks using datasets where expert actions are outnumbered nearly 65:1.
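
The "identify, then mimic" recipe described in the abstract can be sketched concretely: estimate a per-transition advantage, then draw behavioral-cloning batches with sampling probabilities that favor high-advantage transitions, so the rare expert actions in a high-noise dataset are not drowned out by uniform sampling. The sketch below is an illustrative assumption rather than the authors' implementation; the exponential weighting, temperature, batch size, and MSE cloning loss are placeholder choices, and in practice the advantages would come from a learned critic rather than being supplied directly.

# Hypothetical sketch of advantage-prioritized behavioral cloning.
# Hyperparameters, network shapes, and the weighting scheme are illustrative only.
import numpy as np
import torch
import torch.nn as nn

class AdvantagePrioritizedBuffer:
    """Samples transitions with probability increasing in their estimated advantage."""

    def __init__(self, states, actions, advantages, temperature=1.0):
        self.states = states
        self.actions = actions
        # exp(A / T) turns advantage estimates into unnormalized sampling weights;
        # the temperature T is an assumed knob, not a value from the paper.
        weights = np.exp(advantages / temperature)
        self.probs = weights / weights.sum()

    def sample(self, batch_size):
        idx = np.random.choice(len(self.states), size=batch_size, p=self.probs)
        return (torch.as_tensor(self.states[idx], dtype=torch.float32),
                torch.as_tensor(self.actions[idx], dtype=torch.float32))

def bc_update(policy, optimizer, buffer, batch_size=256):
    """One behavioral-cloning step on an advantage-prioritized batch
    (simple MSE cloning loss, assuming continuous actions)."""
    s, a = buffer.sample(batch_size)
    loss = ((policy(s) - a) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage on a synthetic stand-in for a high-noise offline dataset:
state_dim, act_dim = 17, 6
policy = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU(), nn.Linear(256, act_dim))
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)
states = np.random.randn(100_000, state_dim).astype(np.float32)
actions = np.random.randn(100_000, act_dim).astype(np.float32)
advantages = np.random.randn(100_000).astype(np.float32)  # from a critic in practice
buffer = AdvantagePrioritizedBuffer(states, actions, advantages, temperature=0.5)
loss = bc_update(policy, optimizer, buffer)

With uniform sampling, expert-level transitions outnumbered roughly 65:1 would appear in only a small fraction of each batch; weighting the sampler by advantage concentrates the cloning loss on those transitions instead.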

Related research
06/16/2020

Accelerating Online Reinforcement Learning with Offline Datasets

Reinforcement learning provides an appealing formalism for learning cont...
04/12/2022

When Should We Prefer Offline Reinforcement Learning Over Behavioral Cloning?

Offline reinforcement learning (RL) algorithms can acquire effective pol...
12/08/2022

Model-based trajectory stitching for improved behavioural cloning and its applications

Behavioural cloning (BC) is a commonly used imitation learning method to...
11/16/2021

Improving Learning from Demonstrations by Learning from Experience

How to make imitation learning more general when demonstrations are rela...
11/21/2022

Improving TD3-BC: Relaxed Policy Constraint for Offline Learning and Stable Online Fine-Tuning

The ability to discover optimal behaviour from fixed data sets has the p...
01/27/2023

Behaviour Discriminator: A Simple Data Filtering Method to Improve Offline Policy Learning

This paper studies the problem of learning a control policy without the ...