Primal Wasserstein Imitation Learning

06/08/2020
by Robert Dadashi, et al.

Imitation Learning (IL) methods seek to match the behavior of an agent with that of an expert. In the present work, we propose a new IL method based on a conceptually simple algorithm: Primal Wasserstein Imitation Learning (PWIL), which is tied to the primal form of the Wasserstein distance between the expert and the agent state-action distributions. We present a reward function that is derived offline and requires little fine-tuning, in contrast to recent adversarial IL algorithms that learn a reward function through interactions with the environment. We show that we can recover expert behavior on a variety of continuous control tasks from the MuJoCo domain in a sample-efficient manner, in terms of both agent interactions and expert interactions with the environment. Finally, we show that the behavior of the trained agent matches the behavior of the expert as measured by the Wasserstein distance, rather than by the commonly used proxy of performance.
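To make the idea concrete, below is a rough sketch of how an offline-derived, greedy-coupling reward of this flavor could be computed: at each step the agent's state-action pair "transports" a fixed share of probability mass to its nearest expert atoms, and the accumulated transport cost is turned into a bounded reward. The function name, the `alpha`/`beta` scale parameters, and the normalization are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def greedy_coupling_reward(obs_act, expert_atoms, expert_weights,
                           alpha=5.0, beta=5.0, horizon=1000):
    """Sketch of a greedy-coupling imitation reward (illustrative parameters).

    obs_act:        concatenated state-action vector for the current step
    expert_atoms:   (N, d) array of expert state-action vectors
    expert_weights: (N,) array of remaining probability mass per expert atom
    Returns the step reward and the updated (consumed) expert support.
    """
    norm = np.sqrt(obs_act.shape[0])  # dimension-based scaling (assumption)
    cost = 0.0
    budget = 1.0 / horizon  # mass the current agent step must transport
    while budget > 1e-8:
        # distance from the agent's step to every remaining expert atom
        dists = np.linalg.norm(expert_atoms - obs_act, axis=1) / norm
        k = np.argmin(dists)
        moved = min(budget, expert_weights[k])
        cost += moved * dists[k]
        expert_weights[k] -= moved
        budget -= moved
        if expert_weights[k] <= 0:  # atom fully consumed: drop it
            expert_atoms = np.delete(expert_atoms, k, axis=0)
            expert_weights = np.delete(expert_weights, k)
    # small transport cost -> reward close to alpha; large cost -> near zero
    reward = alpha * np.exp(-beta * horizon * cost)
    return reward, expert_atoms, expert_weights
```

Because the expert atoms and weights are fixed up front, this reward needs no learned discriminator and no environment interaction to define, which is the property the abstract contrasts with adversarial IL methods.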

