TRAIL: Near-Optimal Imitation Learning with Suboptimal Data

10/27/2021
by Mengjiao Yang et al.

The aim in imitation learning is to learn effective policies from near-optimal expert demonstrations. However, high-quality demonstrations from human experts can be expensive to obtain in large numbers. On the other hand, it is often much easier to obtain large quantities of suboptimal or task-agnostic trajectories, which are not useful for direct imitation but can nevertheless provide insight into the dynamical structure of the environment, showing what could be done in the environment even if not what should be done. We ask: is it possible to utilize such suboptimal offline datasets to facilitate provably improved downstream imitation learning? In this work, we answer this question affirmatively and present training objectives that use offline datasets to learn a factored transition model whose structure enables the extraction of a latent action space. Our theoretical analysis shows that the learned latent action space can boost the sample efficiency of downstream imitation learning, effectively reducing the need for large near-optimal expert datasets through the use of auxiliary non-expert data. To learn the latent action space in practice, we propose TRAIL (Transition-Reparametrized Actions for Imitation Learning), an algorithm that learns an energy-based transition model contrastively and uses that model to reparametrize the action space for sample-efficient imitation learning. We evaluate the practicality of our objective through experiments on a set of navigation and locomotion tasks. Our results verify the benefits suggested by our theory and show that TRAIL improves baseline imitation learning by up to 4x in performance.
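To make the contrastive idea concrete, here is a minimal sketch of an InfoNCE-style objective for a factored transition model of the form E(s, a, s') = -phi(s, a) . psi(s'), where phi(s, a) plays the role of a latent action. The linear encoders, dimensions, and random data below are illustrative assumptions, not the paper's actual architecture or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative choices, not the paper's).
S_DIM, A_DIM, Z_DIM = 4, 2, 3
BATCH = 8

# Hypothetical linear encoders standing in for learned networks:
# phi maps (state, action) to a latent action; psi embeds the successor state.
W_phi = rng.normal(size=(S_DIM + A_DIM, Z_DIM))
W_psi = rng.normal(size=(S_DIM, Z_DIM))

def phi(s, a):
    """Encode a (state, action) pair into a latent action."""
    return np.concatenate([s, a], axis=-1) @ W_phi

def psi(s_next):
    """Encode the successor state."""
    return s_next @ W_psi

def infonce_loss(s, a, s_next):
    """Contrastive objective for the factored energy model: each
    transition's true successor is scored against the other successors
    in the batch, and the model is rewarded for ranking it highest."""
    logits = phi(s, a) @ psi(s_next).T           # (BATCH, BATCH) score matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # -log softmax on the diagonal

# Random offline transitions; the objective does not require expert data.
s = rng.normal(size=(BATCH, S_DIM))
a = rng.normal(size=(BATCH, A_DIM))
s_next = rng.normal(size=(BATCH, S_DIM))

loss = infonce_loss(s, a, s_next)
z = phi(s, a)  # latent actions to be used for downstream imitation
print(z.shape)
```

After training such a model, downstream behavioral cloning would be performed in the latent action space z rather than the raw action space; the gradient updates and the decoder back to raw actions are omitted here for brevity.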

