State Alignment-based Imitation Learning

11/21/2019, by Fangchen Liu, et al.

Consider an imitation learning problem in which the imitator and the expert have different dynamics models. Most current imitation learning methods fail in this setting because they focus on imitating actions. We propose a novel state alignment-based imitation learning method that trains the imitator to follow the state sequences in expert demonstrations as closely as possible. The state alignment is enforced from both local and global perspectives, and we combine the two into a reinforcement learning framework through a regularized policy update objective. We show the superiority of our method both in standard imitation learning settings and in settings where the expert and the imitator have different dynamics models.
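The abstract describes the method only at a high level. As a rough intuition, the alignment terms can be read as shaping a reward that the policy maximizes under a regularized update. The sketch below is a minimal, hypothetical rendering of that idea in Python; the function names, the Gaussian-kernel local term, the mean-matching global term, and the weight lam are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def local_alignment_reward(state, expert_states, sigma=1.0):
    # Local alignment (illustrative): reward the imitator for being close
    # to the nearest state on the expert trajectory, via a Gaussian kernel.
    d = np.min(np.linalg.norm(expert_states - state, axis=1))
    return float(np.exp(-d ** 2 / (2 * sigma ** 2)))

def global_alignment_penalty(visited_states, expert_states):
    # Global alignment (illustrative): penalize mismatch between the
    # imitator's and the expert's state distributions, here crudely via
    # first moments; the paper's distribution-level term is richer.
    return float(np.linalg.norm(visited_states.mean(axis=0)
                                - expert_states.mean(axis=0)))

def regularized_objective(rl_return, visited_states, expert_states, lam=0.1):
    # Regularized policy update objective: the usual RL return plus a
    # global state-alignment regularizer, weighted by lam.
    return rl_return - lam * global_alignment_penalty(visited_states,
                                                      expert_states)

# Toy usage with random stand-in trajectories.
rng = np.random.default_rng(0)
expert_states = rng.normal(size=(100, 4))   # states from an expert demo
visited_states = rng.normal(size=(100, 4))  # states from an imitator rollout
rl_return = sum(local_alignment_reward(s, expert_states)
                for s in visited_states)
print(regularized_objective(rl_return, visited_states, expert_states))
```

In this reading, the local term supplies a dense per-step reward for tracking the expert's state sequence, while the global term regularizes the policy update toward matching the expert's overall state distribution.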

Related research:

- Imitation Learning by Reinforcement Learning (08/10/2021): Imitation Learning algorithms learn a policy from demonstrations of expe...
- Deconfounded Imitation Learning (11/04/2022): Standard imitation learning can fail when the expert demonstrators have ...
- The Past and Present of Imitation Learning: A Citation Chain Study (01/08/2020): Imitation Learning is a promising area of active research. Over the last...
- Robust Imitation Learning from Noisy Demonstrations (10/20/2020): Learning from noisy demonstrations is a practical but highly challenging...
- Skeletal Feature Compensation for Imitation Learning with Embodiment Mismatch (04/15/2021): Learning from demonstrations in the wild (e.g. YouTube videos) is a tant...
- State-Only Imitation Learning for Dexterous Manipulation (04/07/2020): Dexterous manipulation has been a long-standing challenge in robotics. R...
- Ranking-Based Reward Extrapolation without Rankings (07/09/2019): The performance of imitation learning is typically upper-bounded by the ...
