A Connection between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models

by Chelsea Finn, et al.

Generative adversarial networks (GANs) are a recently proposed class of generative models in which a generator is trained to optimize a cost function that is being simultaneously learned by a discriminator. While the idea of learning cost functions is relatively new to the field of generative modeling, learning costs has long been studied in control and reinforcement learning (RL) domains, typically for imitation learning from demonstrations. In these fields, learning the cost function underlying observed behavior is known as inverse reinforcement learning (IRL) or inverse optimal control. While at first the connection between cost learning in RL and cost learning in generative modeling may appear to be a superficial one, we show in this paper that certain IRL methods are in fact mathematically equivalent to GANs. In particular, we demonstrate an equivalence between a sample-based algorithm for maximum entropy IRL and a GAN in which the generator's density can be evaluated and is provided as an additional input to the discriminator. Interestingly, maximum entropy IRL is a special case of an energy-based model. We discuss the interpretation of GANs as an algorithm for training energy-based models, and relate this interpretation to other recent work that seeks to connect GANs and EBMs. By formally highlighting the connection between GANs, IRL, and EBMs, we hope that researchers in all three communities can better identify and apply transferable ideas from one domain to another, particularly for developing more stable and scalable algorithms: a major challenge in all three domains.
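The construction at the heart of the equivalence is the discriminator's special form: rather than a free-form classifier, it is parameterized by the learned cost c(τ) and takes the generator's (evaluable) density q(τ) as an extra input, D(τ) = p(τ) / (p(τ) + q(τ)), where p(τ) ∝ exp(−c(τ)) is the Boltzmann distribution of maximum entropy IRL. A minimal NumPy sketch of this ratio, under the assumption that `log_Z` (the log partition function) is carried as an extra learned scalar, as in the paper's formulation (function and argument names here are illustrative):

```python
import numpy as np

def discriminator(cost, log_q, log_Z):
    """D(tau) = p(tau) / (p(tau) + q(tau)) for the Boltzmann
    model p(tau) = exp(-cost(tau)) / Z, with q the generator's
    known density. All arguments are in log/energy space."""
    log_p = -cost - log_Z
    # Compute the ratio in log space with logaddexp for stability.
    return np.exp(log_p - np.logaddexp(log_p, log_q))
```

When the energy-based model and the generator assign equal density to a sample, the discriminator outputs 0.5; lowering the cost of a sample pushes its output toward 1, which is what drives the cost (energy) estimate toward the expert data.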




