
State Alignment-based Imitation Learning

by Fangchen Liu, et al.

Consider an imitation learning problem in which the imitator and the expert have different dynamics models. Most current imitation learning methods fail in this setting because they focus on imitating actions. We propose a novel state alignment-based imitation learning method that trains the imitator to follow the state sequences in expert demonstrations as closely as possible. The state alignment comes from both local and global perspectives, and we combine the two in a reinforcement learning framework through a regularized policy update objective. We demonstrate the superiority of our method both in standard imitation learning settings and in settings where the expert and imitator have different dynamics models.
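The abstract describes combining a state-alignment term with the reinforcement learning objective via a regularized policy update. As a rough illustration only (the paper's actual formulation is not given here), one could imagine a local alignment bonus that rewards the imitator for reaching states close to the expert's demonstrated states; the distance metric, the `beta` weight, and the additive combination below are all assumptions for the sketch, not the authors' method:

```python
import numpy as np

def alignment_bonus(next_state, expert_states, beta=1.0):
    """Hypothetical local state-alignment reward: negative L2 distance
    from the imitator's next state to the nearest expert state."""
    dists = np.linalg.norm(expert_states - next_state, axis=1)
    return -beta * dists.min()

def regularized_return(task_rewards, bonuses):
    """Sketch of a combined objective: environment return plus the
    (assumed additive) state-alignment regularization term."""
    return float(np.sum(task_rewards) + np.sum(bonuses))
```

In this toy form, a trajectory that stays on the expert's state sequence incurs no alignment penalty, while one that drifts away is penalized in proportion to its distance from the demonstrated states.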


Imitation Learning by Reinforcement Learning

Imitation Learning algorithms learn a policy from demonstrations of expe...

Deconfounded Imitation Learning

Standard imitation learning can fail when the expert demonstrators have ...

The Past and Present of Imitation Learning: A Citation Chain Study

Imitation Learning is a promising area of active research. Over the last...

Robust Imitation Learning from Noisy Demonstrations

Learning from noisy demonstrations is a practical but highly challenging...

Skeletal Feature Compensation for Imitation Learning with Embodiment Mismatch

Learning from demonstrations in the wild (e.g. YouTube videos) is a tant...

State-Only Imitation Learning for Dexterous Manipulation

Dexterous manipulation has been a long-standing challenge in robotics. R...

Ranking-Based Reward Extrapolation without Rankings

The performance of imitation learning is typically upper-bounded by the ...