
Lipschitzness Is All You Need To Tame Off-policy Generative Adversarial Imitation Learning

by Lionel Blondé, et al.

Despite the recent success of reinforcement learning in various domains, these approaches remain, for the most part, deterringly sensitive to hyper-parameters, and their success often hinges on essential engineering tricks. We consider the case of off-policy generative adversarial imitation learning and perform an in-depth qualitative and quantitative review of the method. Crucially, we show that forcing the learned reward function to be locally Lipschitz-continuous is a sine qua non condition for the method to perform well. We then study the effects of this necessary condition and provide several theoretical results involving the local Lipschitzness of the state-value function. Finally, we propose a novel reward-modulation technique inspired by a new interpretation of gradient-penalty regularization in reinforcement learning. Besides being extremely easy to implement and bringing little to no overhead, we show that our method provides improvements in several continuous control environments of the MuJoCo suite.
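To make the local-Lipschitzness condition concrete, the sketch below estimates the local slope of a learned reward function around each visited state by finite differences and turns it into a penalty, in the spirit of the gradient-penalty regularization the abstract refers to. This is a minimal illustration, not the paper's method: the function name `local_lipschitz_penalty`, the target constant `k`, and the finite-difference scheme are all assumptions made for the example.

```python
import numpy as np

def local_lipschitz_penalty(reward_fn, states, eps=1e-2, n_dirs=8, k=1.0, rng=None):
    """Rough local-Lipschitz penalty for a scalar reward function (illustrative).

    For each state, probe `n_dirs` random unit directions, measure the
    finite-difference slope |r(s + eps*d) - r(s)| / eps, and apply a
    squared-hinge penalty on the largest slope exceeding the target k.
    Averaging this over states gives a regularizer that discourages the
    reward from being steeper than k in a neighborhood of the data.
    """
    rng = rng or np.random.default_rng(0)
    penalties = []
    for s in states:
        slopes = []
        for _ in range(n_dirs):
            d = rng.normal(size=s.shape)
            d = d / (np.linalg.norm(d) + 1e-12)  # random unit direction
            slope = abs(reward_fn(s + eps * d) - reward_fn(s)) / eps
            slopes.append(slope)
        # penalize only the excess slope above the target constant k
        penalties.append(max(0.0, max(slopes) - k) ** 2)
    return float(np.mean(penalties))
```

In an adversarial imitation setup this quantity would be added, with some weight, to the discriminator's loss; a flat (constant) reward incurs zero penalty, while a steep one is pushed back toward slope at most `k`.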


Generative Adversarial Self-Imitation Learning

This paper explores a simple regularizer for reinforcement learning by p...

Wasserstein Distance guided Adversarial Imitation Learning with Reward Shape Exploration

The generative adversarial imitation learning (GAIL) has provided an adv...

Hyperparameter Selection for Imitation Learning

We address the issue of tuning hyperparameters (HPs) for imitation learn...

Addressing reward bias in Adversarial Imitation Learning with neutral reward functions

Generative Adversarial Imitation Learning suffers from the fundamental p...

Relational Mimic for Visual Adversarial Imitation Learning

In this work, we introduce a new method for imitation learning from vide...

Saliency Prediction on Omnidirectional Images with Generative Adversarial Imitation Learning

When watching omnidirectional images (ODIs), subjects can access differe...

Convergence of Value Aggregation for Imitation Learning

Value aggregation is a general framework for solving imitation learning ...