Discriminator-Guided Model-Based Offline Imitation Learning

07/01/2022
by Wenjia Zhang, et al.

Offline imitation learning (IL) is a powerful method for solving decision-making problems from expert demonstrations without reward labels. Existing offline IL methods suffer from severe performance degradation under limited expert data due to covariate shift. Incorporating a learned dynamics model can potentially improve the state-action space coverage of expert data; however, it also faces challenges such as model approximation/generalization errors and the suboptimality of rollout data. In this paper, we propose the Discriminator-guided Model-based offline Imitation Learning (DMIL) framework, which introduces a discriminator to simultaneously distinguish the dynamics correctness and suboptimality of model rollout data against real expert demonstrations. DMIL adopts a novel cooperative-yet-adversarial learning strategy, which uses the discriminator to guide and couple the learning of the policy and dynamics model, resulting in improved performance and robustness. The framework also extends to the case where demonstrations contain a large proportion of suboptimal data. Experimental results show that DMIL and its extension achieve superior performance and robustness compared to state-of-the-art offline IL methods on small datasets.
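
The abstract describes DMIL's mechanism only at a high level. Below is a minimal sketch of one plausible reading of the training loop, assuming a deterministic dynamics model and policy, mean-squared-error surrogates for behavior cloning and model fitting, and equal loss weights. All names (mlp, Dynamics, Discriminator, dmil_step) are hypothetical illustrations, not the authors' implementation.

```python
# Minimal sketch of a DMIL-style update, based only on the abstract above.
# All names, architectures, and loss weights are assumptions, not the
# authors' implementation.
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=256):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

class Dynamics(nn.Module):
    """Deterministic next-state predictor s' = f(s, a) (a simplification)."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = mlp(obs_dim + act_dim, obs_dim)
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

class Discriminator(nn.Module):
    """Scores (s, a, s'): high for real expert transitions, low for model
    rollouts, judging dynamics correctness and action optimality with a
    single network."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = mlp(2 * obs_dim + act_dim, 1)
    def forward(self, s, a, s_next):
        return self.net(torch.cat([s, a, s_next], dim=-1))

def dmil_step(policy, dynamics, disc, batch, opt_pi, opt_m, opt_d):
    s, a, s_next = batch                      # real expert transitions
    bce = nn.BCEWithLogitsLoss()
    ones, zeros = torch.ones(s.shape[0], 1), torch.zeros(s.shape[0], 1)

    # 1) Adversarial side: the discriminator separates expert transitions
    #    from (policy action, model next-state) rollouts.
    a_hat = policy(s).detach()
    d_loss = (bce(disc(s, a, s_next), ones) +
              bce(disc(s, a_hat, dynamics(s, a_hat).detach()), zeros))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Cooperative side: policy and dynamics model are updated jointly to
    #    make their rollouts indistinguishable from expert data, on top of
    #    behavior cloning and supervised model fitting. (Discriminator
    #    gradients accumulated here are cleared by the next zero_grad.)
    a_hat = policy(s)
    adv_loss = bce(disc(s, a_hat, dynamics(s, a_hat)), ones)
    bc_loss = ((a_hat - a) ** 2).mean()                   # behavior cloning
    model_loss = ((dynamics(s, a) - s_next) ** 2).mean()  # model fit
    opt_pi.zero_grad(); opt_m.zero_grad()
    (adv_loss + bc_loss + model_loss).backward()
    opt_pi.step(); opt_m.step()
```

In this reading, the single discriminator plays both roles the abstract names: an adversary that flags incorrect dynamics or suboptimal actions, and a coupling signal that ties the policy and model updates together. A policy can be as simple as policy = mlp(obs_dim, act_dim).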

Related research

07/20/2022 · Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations
We study the problem of offline Imitation Learning (IL) where an agent a...

05/29/2019 · Adversarial Imitation Learning from Incomplete Demonstrations
Imitation learning targets deriving a mapping from states to actions, a...

06/06/2021 · Mitigating Covariate Shift in Imitation Learning via Offline Data Without Great Coverage
This paper studies offline Imitation Learning (IL) where an agent learns...

09/09/2019 · Expert-Level Atari Imitation Learning from Demonstrations Only
One of the key issues for imitation learning lies in making policy learn...

01/27/2023 · Theoretical Analysis of Offline Imitation With Supplementary Dataset
Behavioral cloning (BC) can recover a good policy from abundant expert d...

06/28/2021 · Expert Q-learning: Deep Q-learning With State Values From Expert Examples
We propose a novel algorithm named Expert Q-learning. Expert Q-learning ...

04/21/2023 · Self-Supervised Adversarial Imitation Learning
Behavioural cloning is an imitation learning technique that teaches an a...
