Learning from Imperfect Demonstrations from Agents with Varying Dynamics

by Zhangjie Cao, et al.

Imitation learning enables robots to learn from demonstrations. Previous imitation learning algorithms usually assume access to optimal expert demonstrations. In many real-world applications, however, this assumption is limiting: most collected demonstrations are not optimal, or are produced by an agent with slightly different dynamics. We therefore address imitation learning when the demonstrations can be sub-optimal or drawn from agents with varying dynamics. We develop a metric composed of a feasibility score and an optimality score to measure how useful a demonstration is for imitation learning. The proposed score enables learning from the more informative demonstrations while disregarding the less relevant ones. Our experiments on four simulated environments and on a real robot show that the learned policies achieve higher expected return.
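To make the idea concrete, here is a minimal illustrative sketch of weighting demonstrations by a combined feasibility and optimality score inside a behavioral-cloning-style objective. This is not the paper's actual method: the score values, the multiplicative combination, and the weighted MSE loss are all simplifying assumptions for illustration.

```python
import numpy as np

# Hypothetical per-demonstration scores (placeholders, not the paper's
# learned estimators): feasibility measures whether the demonstrated
# transitions are achievable under the learner's own dynamics;
# optimality measures how close the demonstration is to expert behavior.
feasibility = np.array([0.9, 0.2, 0.8, 0.05])
optimality = np.array([0.8, 0.9, 0.3, 0.95])

# Combine the two scores into a single usefulness weight and normalize,
# so infeasible or sub-optimal demonstrations contribute little.
weights = feasibility * optimality
weights = weights / weights.sum()

def weighted_bc_loss(policy_actions, demo_actions, weights):
    """Weighted behavioral-cloning loss: each demonstration's squared
    action error is scaled by its feasibility-optimality weight."""
    per_demo = np.mean((policy_actions - demo_actions) ** 2, axis=1)
    return float(np.sum(weights * per_demo))

# Toy 2-D actions for four demonstrations; a real policy would be
# trained by minimizing this loss over its parameters.
demo_actions = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]])
policy_actions = np.zeros_like(demo_actions)
loss = weighted_bc_loss(policy_actions, demo_actions, weights)
```

Demonstration 4 (low feasibility) receives almost no weight, so even a large imitation error on it would barely affect the objective; this is the intended effect of filtering by the combined score.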




Confidence-Aware Imitation Learning from Demonstrations with Varying Optimality

Most existing imitation learning approaches assume the demonstrations ar...

Learning Feasibility to Imitate Demonstrators with Different Dynamics

The goal of learning from demonstrations is to learn a policy for an age...

Imitation Learning from Imperfect Demonstration

Imitation learning (IL) aims to learn an optimal policy from demonstrati...

ADAIL: Adaptive Adversarial Imitation Learning

We present the ADaptive Adversarial Imitation Learning (ADAIL) algorithm...

Disturbance-Injected Robust Imitation Learning with Task Achievement

Robust imitation learning using disturbance injections overcomes issues ...

Imitation Learning from Video by Leveraging Proprioception

Classically, imitation learning algorithms have been developed for ideal...

Metric-Based Imitation Learning Between Two Dissimilar Anthropomorphic Robotic Arms

The development of autonomous robotic systems that can learn from human ...