You Only Demonstrate Once: Category-Level Manipulation from Single Visual Demonstration

01/30/2022
by   Bowen Wen, et al.
Promising results have been achieved recently in category-level manipulation that generalizes across object instances. Nevertheless, it often requires expensive real-world data collection and manual specification of semantic keypoints for each object category and task. Additionally, coarse keypoint predictions and ignoring intermediate action sequences hinder adoption in complex manipulation tasks beyond pick-and-place. This work proposes a novel category-level manipulation framework that leverages an object-centric, category-level representation and model-free 6-DoF motion tracking. The canonical object representation is learned solely in simulation and then used to parse a category-level task trajectory from a single demonstration video. Via the canonical representation, the demonstration is reprojected into a target trajectory tailored to a novel object instance. During execution, the manipulation horizon is decomposed into long-range, collision-free motion and last-inch manipulation. For the latter, a category-level behavior cloning (CatBC) method leverages motion tracking to perform closed-loop control. CatBC follows the target trajectory, projected from the demonstration and anchored to a dynamically selected category-level coordinate frame, which a local attention mechanism selects automatically along the manipulation horizon. The framework thus allows different manipulation strategies to be taught from a single demonstration, without complicated manual programming. Extensive experiments demonstrate its efficacy on a range of challenging industrial tasks in high-precision assembly that involve learning complex, long-horizon policies. The system exhibits robustness against uncertainty due to dynamics, as well as generalization across object instances and scene configurations.
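To make the pipeline concrete, below is a minimal sketch of the trajectory-reprojection and closed-loop-following ideas described in the abstract. It is not the authors' implementation: all function names (estimate_canonical_pose, track_pose, send_ee_target) are hypothetical placeholders, poses are assumed to be 4x4 homogeneous matrices, and the learned components (the simulation-trained canonical representation, the attention-based anchor-frame selection, and the CatBC policy itself) are abstracted away.

```python
# Minimal sketch, assuming 4x4 homogeneous transforms and hypothetical perception/robot APIs.
import numpy as np


def estimate_canonical_pose(observation):
    """Hypothetical: pose of the category-level canonical frame in the world frame,
    e.g. predicted by a representation trained purely in simulation."""
    return np.eye(4)  # placeholder


def track_pose(observation):
    """Hypothetical: model-free 6-DoF pose of the manipulated object, updated each step."""
    return np.eye(4)  # placeholder


def reproject_demo(demo_traj_world, T_canon_demo, T_canon_target):
    """Express each demonstrated end-effector pose in the canonical frame of the
    demonstration object, then map it into the frame of the novel (target) instance."""
    demo_in_canon = [np.linalg.inv(T_canon_demo) @ T for T in demo_traj_world]
    return [T_canon_target @ T for T in demo_in_canon]


def follow_last_inch(target_traj, get_observation, send_ee_target):
    """Closed-loop following for the last-inch phase: each goal pose is re-anchored
    to the currently tracked object pose before being sent to the controller."""
    T_anchor_ref = track_pose(get_observation())
    for T_goal in target_traj:
        T_anchor_now = track_pose(get_observation())
        # Shift the goal by however much the object has moved since reprojection.
        T_goal_now = T_anchor_now @ np.linalg.inv(T_anchor_ref) @ T_goal
        send_ee_target(T_goal_now)  # hypothetical robot command interface
```

In the actual framework, the anchor frame is chosen dynamically by a local attention mechanism along the manipulation horizon, and the last-inch commands come from the learned CatBC policy rather than direct waypoint playback; the sketch only illustrates how a single demonstration can be retargeted to a new instance through a shared canonical frame and tracked in closed loop.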


