Loss-annealed GAIL for sample efficient and stable Imitation Learning

01/21/2020
by Rohit Jena, et al.

Imitation learning is the problem of learning a policy from an expert without access to a reward signal. Often, the expert policy is available only in the form of expert demonstrations. Behavior cloning (BC) and GAIL are two popular methods for performing imitation learning in this setting. Behavior cloning converges in a few training iterations but does not reach peak performance and suffers from compounding errors, a consequence of its supervised training framework and i.i.d. assumption. GAIL addresses this problem by accounting for the temporal dependencies between states while matching the occupancy measures of the expert and the policy. Although GAIL has shown success in a number of environments, it requires many environment interactions. Given their complementary benefits, prior work has proposed or attempted to combine the two methods, without much success. We examine some of the limitations of existing approaches that try to combine BC and GAIL, and present an algorithm that combines the best of both worlds, enabling faster and more stable training without compromising final performance. Our algorithm is embarrassingly simple to implement and integrates seamlessly with different policy gradient algorithms. We demonstrate its effectiveness both on low-dimensional control tasks in a limited-data setting and in high-dimensional grid-world environments.
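The title's "loss annealing" suggests blending a behavior-cloning loss with the GAIL (adversarial) loss under a weight that decays over training, so early updates lean on cheap supervised BC signal and later updates rely on the occupancy-matching objective. The sketch below is an illustrative guess at such a schedule, not the paper's actual algorithm: the linear decay, the function names, and the loss placeholders are all assumptions.

```python
def annealing_coefficient(step, total_anneal_steps):
    """Weight on the BC loss: decays linearly from 1 to 0 (assumed schedule)."""
    return max(0.0, 1.0 - step / total_anneal_steps)


def combined_loss(bc_loss, gail_loss, step, total_anneal_steps):
    """Convex combination of BC and GAIL losses, annealed toward pure GAIL.

    bc_loss and gail_loss stand in for the supervised imitation loss on
    expert demonstrations and the adversarial policy loss, respectively;
    in practice they would be computed per minibatch by the RL framework.
    """
    lam = annealing_coefficient(step, total_anneal_steps)
    return lam * bc_loss + (1.0 - lam) * gail_loss
```

At step 0 the policy is trained purely on the BC loss; after `total_anneal_steps` the objective is purely the GAIL loss, which matches the intuition of using BC for fast initial progress and GAIL for asymptotic performance.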


Related research

06/16/2023  Sample-Efficient On-Policy Imitation Learning from Observations
Imitation learning from demonstrations (ILD) aims to alleviate numerous ...

11/08/2021  Off-policy Imitation Learning from Visual Inputs
Recently, various successful applications utilizing expert states in imi...

05/23/2022  Data augmentation for efficient learning from parametric experts
We present a simple, yet powerful data-augmentation technique to enable ...

05/26/2021  Provable Representation Learning for Imitation with Contrastive Fourier Features
In imitation learning, it is common to learn a behavior policy to match ...

06/12/2022  Case-Based Inverse Reinforcement Learning Using Temporal Coherence
Providing expert trajectories in the context of Imitation Learning is of...

05/06/2022  Diverse Imitation Learning via Self-Organizing Generative Models
Imitation learning is the task of replicating expert policy from demonst...

09/18/2020  Compressed imitation learning
In analogy to compressed sensing, which allows sample-efficient signal r...
