Stochastic Action Prediction for Imitation Learning

by Sagar Gubbi Venkatesh, et al.

Imitation learning is a data-driven approach to acquiring skills in which a policy mapping observations to actions is learned from expert demonstrations. Experts are not always consistent when demonstrating and may accomplish the same task in slightly different ways. In this paper, we demonstrate the inherent stochasticity in demonstrations collected for tasks such as line following with a remote-controlled car, as well as manipulation tasks such as reaching, pushing, and picking and placing an object. We model the stochasticity in the data distribution using autoregressive action generation, generative adversarial networks, and variational prediction, and compare the performance of these approaches. We find that accounting for stochasticity in the expert data leads to a substantial improvement in the task success rate.
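The core idea of modeling stochastic expert actions, rather than regressing a single deterministic action, can be illustrated with a minimal sketch. The example below is hypothetical and not the paper's implementation: it shows a toy Gaussian action head that predicts a mean and standard deviation for a 1-D action, samples from that distribution, and is trained by minimizing the Gaussian negative log-likelihood of expert actions. The linear "network", weight values, and function names are all illustrative assumptions.

```python
import math
import random

def stochastic_policy(observation, weights, log_std):
    """Predict a Gaussian over a 1-D action and sample from it.

    A toy linear layer stands in for the policy network; the key point
    is that the output is a distribution, not a single action.
    """
    mean = sum(w * o for w, o in zip(weights, observation))
    std = math.exp(log_std)  # parameterize std via its log so it stays positive
    action = random.gauss(mean, std)
    return mean, std, action

def gaussian_nll(expert_action, mean, std):
    """Negative log-likelihood loss for fitting the policy to expert actions."""
    return 0.5 * math.log(2 * math.pi * std ** 2) \
        + (expert_action - mean) ** 2 / (2 * std ** 2)

# Example: two demonstrations of the "same" state with slightly
# different expert actions, as described in the abstract.
mean, std, sampled = stochastic_policy([0.5, -0.2], [1.0, 2.0], log_std=-1.0)
loss_a = gaussian_nll(0.10, mean, std)   # expert action near the mean: low loss
loss_b = gaussian_nll(0.90, mean, std)   # unlikely expert action: higher loss
```

Because the loss is a likelihood rather than a squared error to a single target, inconsistent demonstrations widen the predicted distribution instead of being averaged into an action the expert never took.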
