Motion Reasoning for Goal-Based Imitation Learning

11/13/2019
by   De-An Huang, et al.

We address goal-based imitation learning, where the aim is to infer the symbolic goal of a third-person video demonstration. This enables the robot to plan for execution and reproduce the same goal in a completely different environment. The key challenge is that the goal of a video demonstration is often ambiguous at the level of semantic actions: human demonstrators may unintentionally achieve certain subgoals with their actions. Our main contribution is a motion reasoning framework that combines task and motion planning to disambiguate the true intention of the demonstrator in the video. This allows us to robustly recognize goals that cannot be disambiguated by previous action-based approaches. We evaluate our approach on a dataset of 96 video demonstrations collected in a mockup kitchen environment. We show that motion reasoning plays an important role in recognizing the actual goal of the demonstrator and improves the success rate by over 20%. Given the inferred goal from the video demonstration, our robot is able to reproduce the same task in a real kitchen environment.
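To make the idea concrete, here is a minimal sketch (not the authors' implementation) of how motion-level reasoning could disambiguate among candidate symbolic goals that are all consistent with the recognized actions: each candidate goal is scored by the cost of the motion plan needed to achieve it, and the goal whose required motion best matches the demonstrator's observed trajectory is selected. The names `candidate_goals`, `motion_plan_cost`, and `observed_trajectory_cost` are hypothetical placeholders introduced only for illustration.

```python
# Minimal sketch of goal disambiguation via motion reasoning (assumptions,
# not the paper's implementation): among candidate goals consistent with
# the recognized actions, prefer the goal whose required motion best
# explains the demonstrator's observed motion.

from typing import Callable, List, Tuple


def disambiguate_goal(
    candidate_goals: List[str],
    observed_trajectory_cost: float,
    motion_plan_cost: Callable[[str], float],
) -> Tuple[str, float]:
    """Return the candidate goal whose planned motion cost is closest to
    the cost of the demonstrator's observed motion.

    candidate_goals: symbolic goals consistent with the detected actions
                     (hypothetical, e.g. ["cup_on_shelf", "cup_near_sink"]).
    observed_trajectory_cost: cost (e.g. path length) of the tracked
                              demonstrator motion in the video.
    motion_plan_cost: queries a task-and-motion planner and returns the
                      cost of the cheapest plan achieving the given goal.
    """
    best_goal, best_gap = candidate_goals[0], float("inf")
    for goal in candidate_goals:
        # If the demonstrator moved much more than this goal requires,
        # the extra motion was likely intentional and the goal does not
        # capture the full intent; penalize the mismatch.
        gap = abs(motion_plan_cost(goal) - observed_trajectory_cost)
        if gap < best_gap:
            best_goal, best_gap = goal, gap
    return best_goal, best_gap


if __name__ == "__main__":
    # Toy usage with a fake planner: two goals explain the same actions,
    # but only one matches the effort the demonstrator actually spent.
    fake_costs = {"cup_on_shelf": 1.8, "cup_near_sink": 0.6}
    goal, gap = disambiguate_goal(
        candidate_goals=list(fake_costs),
        observed_trajectory_cost=1.7,
        motion_plan_cost=fake_costs.__getitem__,
    )
    print(goal, round(gap, 2))  # cup_on_shelf 0.1
```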


research
11/04/2021

Learning to Manipulate Tools by Aligning Simulation to Video Demonstration

A seamless integration of robots into human environments requires robots...
research
05/22/2019

Imitation Learning from Video by Leveraging Proprioception

Classically, imitation learning algorithms have been developed for ideal...
research
11/21/2019

Third-Person Visual Imitation Learning via Decoupled Hierarchical Controller

We study a generalized setup for learning from demonstration to build an...
research
08/30/2023

RoboTAP: Tracking Arbitrary Points for Few-Shot Visual Imitation

For robots to be useful outside labs and specialized factories we need a...
research
02/24/2021

Learning to Shift Attention for Motion Generation

One challenge of motion generation using robot learning from demonstrati...
research
12/27/2022

Behavioral Cloning via Search in Video PreTraining Latent Space

Our aim is to build autonomous agents that can solve tasks in environmen...
research
11/26/2019

Imitation Learning of Robot Policies by Combining Language, Vision and Demonstration

In this work we propose a novel end-to-end imitation learning approach w...
