Watch and Match: Supercharging Imitation with Regularized Optimal Transport

06/30/2022
by Siddhant Haldar, et al.

Imitation learning holds tremendous promise for learning policies efficiently in complex decision-making problems. Current state-of-the-art algorithms often use inverse reinforcement learning (IRL), where, given a set of expert demonstrations, an agent alternately infers a reward function and the associated optimal policy. However, such IRL approaches often require substantial online interactions for complex control problems. In this work, we present Regularized Optimal Transport (ROT), a new imitation learning algorithm that builds on recent advances in optimal transport-based trajectory matching. Our key technical insight is that adaptively combining trajectory-matching rewards with behavior cloning can significantly accelerate imitation even with only a few demonstrations. Our experiments on 20 visual control tasks across the DeepMind Control Suite, the OpenAI Robotics Suite, and the Meta-World Benchmark demonstrate an average of 7.8x faster imitation to reach 90% of expert performance compared to prior state-of-the-art methods. On real-world robotic manipulation, with just one demonstration and an hour of online training, ROT achieves an average success rate of 90.1%.
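To make the two ingredients in the abstract concrete, the sketch below is a rough illustration (not the authors' implementation): it assumes cosine costs over precomputed state features, an entropic (Sinkhorn) optimal-transport plan for the per-step trajectory-matching reward, and a sigmoid soft-weighting of the behavior-cloning term as a stand-in for the paper's exact adaptive rule. All function names and hyperparameters (eps, temperature) are illustrative.

```python
import numpy as np

def sinkhorn_plan(cost, eps=0.1, n_iters=100):
    """Entropy-regularized OT: transport plan between two uniform measures
    given a cost matrix of shape (T_agent, T_expert)."""
    T, Tp = cost.shape
    mu, nu = np.full(T, 1.0 / T), np.full(Tp, 1.0 / Tp)
    K = np.exp(-cost / eps)              # Gibbs kernel
    u = np.ones(T)
    for _ in range(n_iters):             # Sinkhorn scaling iterations
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]   # diag(u) K diag(v)

def ot_rewards(agent_feats, expert_feats):
    """Per-timestep trajectory-matching reward: negative transported cost
    between agent and expert state features (cosine cost, assumed here)."""
    a = agent_feats / np.linalg.norm(agent_feats, axis=1, keepdims=True)
    e = expert_feats / np.linalg.norm(expert_feats, axis=1, keepdims=True)
    cost = 1.0 - a @ e.T
    plan = sinkhorn_plan(cost)
    return -(plan * cost).sum(axis=1)    # one scalar reward per agent step

def adaptive_bc_weight(q_demo_action, q_policy_action, temperature=1.0):
    """Illustrative soft weight on the behavior-cloning loss: larger when the
    demonstrated action looks better than the policy's action under Q."""
    return 1.0 / (1.0 + np.exp(-(q_demo_action - q_policy_action) / temperature))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    agent_traj, expert_traj = rng.normal(size=(50, 32)), rng.normal(size=(60, 32))
    r = ot_rewards(agent_traj, expert_traj)      # shape (50,)
    lam = adaptive_bc_weight(q_demo_action=1.2, q_policy_action=0.8)
    print(r.shape, lam)
```

In this sketch, the OT rewards would drive a standard RL actor-critic update while the behavior-cloning loss is added with weight lam, so the BC term dominates early (when the demonstrations outperform the policy) and fades as the policy improves.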

