RoboTAP: Tracking Arbitrary Points for Few-Shot Visual Imitation

08/30/2023
by Mel Večerík et al.

For robots to be useful outside labs and specialized factories, we need a way to teach them new behaviors quickly. Current approaches lack either the generality to onboard new tasks without task-specific engineering, or the data-efficiency to do so quickly enough for practical use. In this work we explore dense tracking as a representational vehicle for faster and more general learning from demonstration. Our approach uses Track-Any-Point (TAP) models to isolate the relevant motion in a demonstration and to parameterize a low-level controller that reproduces this motion across changes in the scene configuration. We show that this yields robust robot policies that can solve complex object-arrangement tasks such as shape-matching and stacking, and even full path-following tasks such as applying glue and sticking objects together, all from demonstrations that can be collected in minutes.
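The abstract's core idea of parameterizing a low-level controller with tracked points can be sketched as a simple point-based visual-servoing step. The function below is a hypothetical simplification, not the paper's actual controller: it assumes we already have 2D pixel positions of the TAP-tracked points in the current frame and their positions at the corresponding moment of the demonstration, and it drives the end-effector along the mean point offset.

```python
import numpy as np

def servo_step(current_pts, goal_pts, gain=0.5, tol=2.0):
    """One point-based visual-servoing step (illustrative sketch only).

    current_pts, goal_pts: (N, 2) arrays of tracked-point pixel positions,
    in the current frame and the demonstration frame respectively.
    Returns a 2D velocity command and a convergence flag.
    """
    errors = goal_pts - current_pts              # per-point offsets (pixels)
    velocity = gain * errors.mean(axis=0)        # move along the average offset
    # Converged once every tracked point is within `tol` pixels of its goal.
    converged = np.linalg.norm(errors, axis=1).max() < tol
    return velocity, converged

# Example: two tracked points, both offset by 4 pixels along x.
cur = np.array([[0.0, 0.0], [10.0, 0.0]])
goal = np.array([[4.0, 0.0], [14.0, 0.0]])
v, done = servo_step(cur, goal)  # v ≈ [2.0, 0.0], not yet converged
```

In this sketch the mean offset makes the command robust to individual tracking noise; a full controller would also handle occluded points, depth, and switching between motion phases of the demonstration.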


Related Research

- One-Shot Hierarchical Imitation Learning of Compound Visuomotor Tasks (10/25/2018): We consider the problem of learning multi-stage vision-based tasks on a ...
- One-Shot Imitation Learning (03/21/2017): Imitation learning has been commonly applied to solve different tasks in...
- Motion Reasoning for Goal-Based Imitation Learning (11/13/2019): We address goal-based imitation learning, where the aim is to output the...
- Sparse-Dense Motion Modelling and Tracking for Manipulation without Prior Object Models (04/25/2022): This work presents an approach for modelling and tracking previously uns...
- Learning Periodic Tasks from Human Demonstrations (09/28/2021): We develop a method for learning periodic tasks from visual demonstratio...
- Benchmark for Skill Learning from Demonstration: Impact of User Experience, Task Complexity, and Start Configuration on Performance (11/07/2019): In this work, we contribute a large-scale study benchmarking the perform...
- Conditional Visual Servoing for Multi-Step Tasks (05/17/2022): Visual Servoing has been effectively used to move a robot into specific ...
