Learning Rope Manipulation Policies Using Dense Object Descriptors Trained on Synthetic Depth Data

03/03/2020
by Priya Sundaresan et al.

Robotic manipulation of deformable 1D objects such as ropes, cables, and hoses is challenging due to the lack of high-fidelity analytic models and large configuration spaces. Furthermore, learning end-to-end manipulation policies directly from images and physical interaction requires significant time on a robot and can fail to generalize across tasks. We address these challenges using interpretable deep visual representations for rope, extending recent work on dense object descriptors for robot manipulation. This facilitates the design of interpretable and transferable geometric policies built on top of the learned representations, decoupling visual reasoning and control. We present an approach that learns point-pair correspondences between initial and goal rope configurations, which implicitly encodes geometric structure, entirely in simulation from synthetic depth images. We demonstrate that the learned representation – dense depth object descriptors (DDODs) – can be used to manipulate a real rope into a variety of different arrangements either by learning from demonstrations or using interpretable geometric policies. In 50 trials of a knot-tying task with the ABB YuMi Robot, the system achieves a 66% knot-tying success rate from previously unseen configurations. See https://tinyurl.com/rope-learning for supplementary material and videos.
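The core operation behind descriptor-based correspondence is simple: once a network maps each image to a per-pixel descriptor map, a point in the initial image is matched to the goal image by nearest-neighbor search in descriptor space. A minimal sketch of that lookup, assuming precomputed descriptor maps of shape (H, W, D) (the function and variable names here are illustrative, not from the paper's code):

```python
import numpy as np

def best_match(descriptors_a, pixel, descriptors_b):
    """Return the pixel in image B whose descriptor is nearest (L2)
    to the descriptor at `pixel` in image A."""
    h, w, d = descriptors_b.shape
    query = descriptors_a[pixel[0], pixel[1]]        # (D,) descriptor at query pixel
    diffs = descriptors_b.reshape(-1, d) - query     # (H*W, D) differences
    idx = np.argmin(np.einsum('nd,nd->n', diffs, diffs))  # squared L2 distances
    return divmod(idx, w)                            # flat index -> (row, col)

# Toy example: random "descriptor maps" with one planted correspondence.
rng = np.random.default_rng(0)
da = rng.normal(size=(8, 8, 3))
db = rng.normal(size=(8, 8, 3))
db[5, 2] = da[4, 4]                 # plant an exact match
print(best_match(da, (4, 4), db))   # → (5, 2)
```

In practice the descriptor dimension D and image resolution are much larger, and matches can be made robust by thresholding the best distance or averaging over a local window, but the nearest-neighbor lookup above is the essential step that lets geometric policies transfer pick points from a goal configuration onto the current rope image.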


