Robust Imitation Learning from Noisy Demonstrations

by Voot Tangkaratt et al.

Learning from noisy demonstrations is a practical but highly challenging problem in imitation learning. In this paper, we first theoretically show that robust imitation learning can be achieved by optimizing a classification risk with a symmetric loss. Based on this theoretical finding, we then propose a new imitation learning method that optimizes the classification risk by effectively combining pseudo-labeling with co-training. Unlike existing methods, our method does not require additional labels or strict assumptions about noise distributions. Experimental results on continuous-control benchmarks show that our method is more robust than state-of-the-art methods.
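The symmetric-loss condition underpinning the theoretical result can be illustrated concretely: a margin loss ℓ is symmetric when ℓ(z) + ℓ(−z) equals a constant for every margin z, which is what makes the induced classification risk robust to symmetric label noise. A minimal sketch below checks this property numerically for the sigmoid loss (a standard symmetric loss, used here as an illustration rather than as the specific loss from the paper) and contrasts it with the logistic loss, which is not symmetric:

```python
import math

def sigmoid_loss(z):
    """Sigmoid loss l(z) = 1 / (1 + exp(z)); a classic symmetric loss."""
    return 1.0 / (1.0 + math.exp(z))

def logistic_loss(z):
    """Logistic loss l(z) = log(1 + exp(-z)); shown for contrast (not symmetric)."""
    return math.log(1.0 + math.exp(-z))

# Symmetry check: for a symmetric loss, l(z) + l(-z) is the same constant
# for all margins z (for the sigmoid loss that constant is 1).
for z in [-3.0, -0.5, 0.0, 1.2, 4.0]:
    sym_sum = sigmoid_loss(z) + sigmoid_loss(-z)
    log_sum = logistic_loss(z) + logistic_loss(-z)
    print(f"z={z:+.1f}  sigmoid sum={sym_sum:.6f}  logistic sum={log_sum:.6f}")
```

Running this shows the sigmoid sums pinned at 1 for every margin while the logistic sums vary with z, which is exactly the property that lets a symmetric loss cancel out the contribution of symmetrically flipped (noisy) labels in the risk.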




State Alignment-based Imitation Learning

Consider an imitation learning problem in which the imitator and the expert ...

Robust Imitation Learning from Corrupted Demonstrations

We consider offline Imitation Learning from corrupted demonstrations whe...

MILP-based Imitation Learning for HVAC control

To optimize the operation of an HVAC system with advanced techniques such...

An Algorithmic Perspective on Imitation Learning

As robots and other intelligent agents move from simple environments and...

Maximum Causal Tsallis Entropy Imitation Learning

In this paper, we propose a novel maximum causal Tsallis entropy (MCTE) ...

Action Assembly: Sparse Imitation Learning for Text Based Games with Combinatorial Action Spaces

We propose a computationally efficient algorithm that combines compresse...

Data Driven Aircraft Trajectory Prediction with Deep Imitation Learning

The current Air Traffic Management (ATM) system worldwide has reached it...