A Divergence Minimization Perspective on Imitation Learning Methods

11/06/2019
by Seyed Kamyar Seyed Ghasemipour, et al.
In many settings, it is desirable to learn decision-making and control policies by learning from or bootstrapping from expert demonstrations. The most common approaches under this Imitation Learning (IL) framework are Behavioural Cloning (BC) and Inverse Reinforcement Learning (IRL). Recent methods for IRL have demonstrated the capacity to learn effective policies with access to a very limited set of demonstrations, a scenario in which BC methods often fail. Unfortunately, due to multiple factors of variation, directly comparing these methods does not provide adequate intuition for understanding this difference in performance. In this work, we present a unified probabilistic perspective on IL algorithms based on divergence minimization. We present f-MAX, an f-divergence generalization of AIRL [Fu et al., 2018], a state-of-the-art IRL method. f-MAX enables us to relate prior IRL methods such as GAIL [Ho & Ermon, 2016] and AIRL [Fu et al., 2018] and to understand their algorithmic properties. Through the lens of divergence minimization we tease apart the differences between BC and successful IRL approaches, and empirically evaluate these nuances on simulated high-dimensional continuous control domains. Our findings conclusively identify that IRL's state-marginal matching objective contributes most to its superior performance. Lastly, we apply our new understanding of IL methods to the problem of state-marginal matching, where we demonstrate that in simulated arm-pushing environments we can teach agents a diverse range of behaviours using only hand-specified state distributions and no reward functions or expert demonstrations. For datasets and code to reproduce our results, please refer to https://github.com/KamyarGh/rl_swiss/blob/master/reproducing/fmax_paper.md .
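As a rough aid to reading the abstract, the sketch below writes out the standard f-divergence and the divergence-minimization view of imitation learning that the paper builds on. The notation is ours, not the paper's: ρ_exp and ρ_π denote the expert's and the learned policy's state-action marginal distributions.

% Generic f-divergence between distributions P and Q
% (f convex with f(1) = 0); this is the standard definition, not specific to the paper.
\[
D_f(P \,\|\, Q) \;=\; \mathbb{E}_{x \sim Q}\!\left[ f\!\left( \frac{P(x)}{Q(x)} \right) \right]
\]
% Imitation learning framed as divergence minimization
% (rho_exp and rho_pi are assumed shorthand for state-action marginals):
\[
\min_{\pi}\; D_f\bigl( \rho_{\mathrm{exp}}(s,a) \,\|\, \rho_{\pi}(s,a) \bigr)
\]
% Choosing f(u) = u log u recovers the forward KL divergence,
% while f(u) = -log u recovers the reverse KL divergence.

Different choices of f recover different IL objectives; for instance, the Jensen-Shannon divergence corresponds to the adversarial objective of GAIL [Ho & Ermon, 2016], which is one of the connections the f-MAX framing makes explicit.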
