Mitigating Covariate Shift in Imitation Learning via Offline Data Without Great Coverage

06/06/2021
by Jonathan D. Chang, et al.

This paper studies offline Imitation Learning (IL), where an agent learns to imitate an expert demonstrator without any additional online environment interaction. Instead, the learner is given a static offline dataset of state-action-next-state transition triples collected by a potentially less proficient behavior policy. We introduce Model-based IL from Offline data (MILO): an algorithmic framework that uses this static dataset to solve the offline IL problem efficiently, both in theory and in practice. In theory, even if the behavior policy is highly sub-optimal compared to the expert, we show that as long as the behavior data provides sufficient coverage of the expert's state-action traces (with no need for global coverage of the entire state-action space), MILO provably combats the covariate shift issue in IL. Complementing our theoretical results, we demonstrate that a practical implementation of our approach mitigates covariate shift on benchmark MuJoCo continuous control tasks. With behavior policies whose performance is less than half that of the expert, MILO still imitates successfully from an extremely small number of expert state-action pairs, while traditional offline IL methods such as behavior cloning (BC) fail completely. Source code is provided at https://github.com/jdchang1/milo.
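The mechanism described in the abstract, building a dynamics model from the offline behavior data and penalizing the learner for leaving the region that data covers, can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation (the released code at the linked repository is the reference): the ensemble of bootstrapped linear models, the disagreement-based penalty, and all names such as fit_member, disagreement, disc_score, penalized_reward, and LAMBDA are assumptions made for this sketch.

```python
# Hypothetical sketch of a MILO-style recipe:
# 1) fit an ensemble of dynamics models on offline (s, a, s') triples,
# 2) use ensemble disagreement as a pessimism penalty so the learner is
#    discouraged from leaving the region covered by the behavior data,
# 3) run adversarial imitation (discriminator-style reward) inside the
#    learned model, subtracting the penalty from the imitation reward.
import numpy as np

rng = np.random.default_rng(0)

# --- Toy offline dataset from a (possibly sub-optimal) behavior policy ---
# 2-D states, 1-D actions; true dynamics are linear plus noise.
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[0.5], [1.0]])
S = rng.normal(size=(500, 2))                       # behavior states
U = rng.normal(size=(500, 1))                       # behavior actions
S_next = S @ A_true.T + U @ B_true.T + 0.05 * rng.normal(size=(500, 2))

# --- 1) Ensemble of linear dynamics models fit by bootstrapped least squares ---
def fit_member(idx):
    X = np.hstack([S[idx], U[idx]])                 # regressors [s, a]
    W, *_ = np.linalg.lstsq(X, S_next[idx], rcond=None)
    return W                                        # (3, 2) weight matrix

ensemble = [fit_member(rng.choice(len(S), size=len(S))) for _ in range(5)]

def predict(W, s, a):
    return np.hstack([s, a]) @ W

def disagreement(s, a):
    preds = np.stack([predict(W, s, a) for W in ensemble])
    return float(preds.std(axis=0).max())           # max std across state dims

# --- 2) Pessimism-penalized imitation reward ---
def disc_score(s, a):
    # Placeholder for a learned discriminator scoring how expert-like a
    # state-action pair is; here it simply favors small actions.
    return -float(np.abs(a).sum())

LAMBDA = 10.0                                       # penalty weight (hyperparameter)

def penalized_reward(s, a):
    return disc_score(s, a) - LAMBDA * disagreement(s, a)

# Disagreement grows with distance from the behavior data, so the penalty
# increasingly discourages actions outside the dataset's coverage -- the
# mechanism that combats covariate shift.
s0 = np.zeros(2)
print("near behavior data     :", penalized_reward(s0, np.array([0.1])))
print("far from behavior data :", penalized_reward(s0, np.array([25.0])))

# 3) A policy would then be optimized against penalized_reward inside the
#    learned model (any model-based RL or planning routine), while the
#    discriminator is updated to separate expert pairs from policy rollouts.
```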


Related research

02/06/2023
DITTO: Offline Imitation Learning with World Models
We propose DITTO, an offline imitation learning algorithm which uses wor...

07/01/2022
Discriminator-Guided Model-Based Offline Imitation Learning
Offline imitation learning (IL) is a powerful method to solve decision-m...

02/28/2022
LobsDICE: Offline Imitation Learning from Observation via Stationary Distribution Correction Estimation
We consider the problem of imitation from observation (IfO), in which th...

05/06/2023
Replicating Complex Dialogue Policy of Humans via Offline Imitation Learning with Supervised Regularization
Policy learning (PL) is a module of a task-oriented dialogue system that...

02/04/2021
Feedback in Imitation Learning: The Three Regimes of Covariate Shift
Imitation learning practitioners have often noted that conditioning poli...

11/08/2022
ABC: Adversarial Behavioral Cloning for Offline Mode-Seeking Imitation Learning
Given a dataset of expert agent interactions with an environment of inte...

06/04/2023
Data Quality in Imitation Learning
In supervised learning, the question of data quality and curation has be...
