A Critique of Strictly Batch Imitation Learning

10/05/2021
by Gokul Swamy, et al.

Recent work by Jarrett et al. attempts to frame the problem of offline imitation learning (IL) as one of learning a joint energy-based model, with the hope of outperforming standard behavioral cloning. We suggest that notational issues obscure how the pseudo-state visitation distribution the authors propose to optimize might be disconnected from the policy's true state visitation distribution. We further construct natural examples where the parameter coupling advocated by Jarrett et al. leads to inconsistent estimates of the expert's policy, unlike behavioral cloning.
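As a rough sketch of the coupling at issue (the notation here is ours, not Jarrett et al.'s, and the precise objectives should be checked against the full text): behavioral cloning fits only the action-conditionals,

\min_\theta \; \mathbb{E}_{(s,a) \sim \mathcal{D}}\big[-\log \pi_\theta(a \mid s)\big],
\qquad
\pi_\theta(a \mid s) = \frac{\exp f_\theta(s, a)}{\sum_{a'} \exp f_\theta(s, a')},

whereas the energy-based construction reuses the same logits f_\theta to define a state marginal via the free energy,

\rho_\theta(s) \;\propto\; \exp\Big(\log \sum_a \exp f_\theta(s, a)\Big) \;=\; \sum_a \exp f_\theta(s, a),

and adds a term encouraging \rho_\theta to match the demonstrated states. Nothing in this construction ties \rho_\theta to d_{\pi_\theta}, the distribution of states actually encountered when rolling out \pi_\theta in the environment; and because \theta is shared across both terms, the state-matching term can pull \pi_\theta(a \mid s) away from the expert's conditionals even with unlimited demonstration data.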


Related research

08/08/2020 · Non-Adversarial Imitation Learning and its Connections to Adversarial Methods
Many modern methods for imitation learning and inverse reinforcement lea...

11/16/2019 · On Value Discrepancy of Imitation Learning
Imitation learning trains a policy from expert demonstrations. Imitation...

06/25/2020 · Strictly Batch Imitation Learning by Energy-based Distribution Matching
Consider learning a policy purely on the basis of demonstrated behavior—...

10/07/2020 · Provable Hierarchical Imitation Learning via EM
Due to recent empirical successes, the options framework for hierarchica...

05/02/2020 · An Imitation Game for Learning Semantic Parsers from User Interaction
Despite the widely successful applications, bootstrapping and fine-tunin...

11/08/2022 · ABC: Adversarial Behavioral Cloning for Offline Mode-Seeking Imitation Learning
Given a dataset of expert agent interactions with an environment of inte...

06/18/2020 · Reparameterized Variational Divergence Minimization for Stable Imitation
While recent state-of-the-art results for adversarial imitation-learning...
