Learning Robotic Manipulation through Visual Planning and Acting

05/11/2019
by Angelina Wang et al.

Planning for robotic manipulation requires reasoning about the changes a robot can effect on objects. When such interactions can be modelled analytically, as in domains with rigid objects, efficient planning algorithms exist. However, in both domestic and industrial domains, the objects of interest can be soft, or deformable, and hard to model analytically. For such cases, we posit that a data-driven modelling approach is more suitable. In recent years, progress in deep generative models has produced methods that learn to 'imagine' plausible images from data. Building on the recent Causal InfoGAN generative model, in this work we learn to imagine goal-directed object manipulation directly from raw image data of self-supervised interaction of the robot with the object. After learning, given a goal observation of the system, our model can generate an imagined plan -- a sequence of images that transition the object into the desired goal. To execute the plan, we use it as a reference trajectory to track with a visual servoing controller, which we also learn from the data as an inverse dynamics model. In a simulated manipulation task, we show that separating the problem into visual planning and visual tracking control is more sample efficient and more interpretable than alternative data-driven approaches. We further demonstrate our approach on learning to imagine and execute in three environments, the final of which is deformable rope manipulation on a PR2 robot.
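The plan-then-track decomposition described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's implementation: `imagine_plan` stands in for the learned Causal InfoGAN planner (here a simple interpolation between observations), `inverse_dynamics` stands in for the learned inverse model, and `step` is a toy environment. All function names and the pixel-difference "action" are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def imagine_plan(start_obs, goal_obs, n_steps=5):
    """Stand-in for the learned generative planner: produces a sequence of
    intermediate 'imagined' observations between start and goal.
    Here this is just linear interpolation in pixel space."""
    alphas = np.linspace(0.0, 1.0, n_steps)
    return [start_obs * (1 - a) + goal_obs * a for a in alphas]

def inverse_dynamics(obs, next_obs):
    """Stand-in for the learned inverse model: maps a pair of consecutive
    observations to an action that transitions between them.
    In this toy sketch the 'action' is the pixel-space difference."""
    return next_obs - obs

def step(obs, action):
    """Toy environment dynamics: the action is applied additively,
    with small noise simulating execution error."""
    return obs + action + rng.normal(scale=0.01, size=obs.shape)

def plan_and_track(start_obs, goal_obs, n_steps=5):
    """Visual planning and acting: imagine a reference trajectory,
    then track it waypoint-by-waypoint with the inverse model acting
    as a visual servoing controller."""
    plan = imagine_plan(start_obs, goal_obs, n_steps)
    obs = start_obs
    for waypoint in plan[1:]:
        action = inverse_dynamics(obs, waypoint)  # servo toward next waypoint
        obs = step(obs, action)
    return obs

start = np.zeros((8, 8))
goal = np.ones((8, 8))
final = plan_and_track(start, goal)
print(np.abs(final - goal).mean())  # small residual tracking error
```

The key design point the sketch preserves is the separation of concerns: the planner only has to produce plausible intermediate observations, while the controller only has to close the (small) gap to the next waypoint, which is what makes the two components individually learnable and interpretable.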


Related research:
- Learning Plannable Representations with Causal InfoGAN (07/24/2018)
- Combining Self-Supervised Learning and Imitation for Vision-Based Rope Manipulation (03/06/2017)
- Visual Foresight: Model-Based Deep Reinforcement Learning for Vision-Based Robotic Control (12/03/2018)
- Hallucinative Topological Memory for Zero-Shot Visual Planning (02/27/2020)
- Learning Quasi-Static 3D Models of Markerless Deformable Linear Objects for Bimanual Robotic Manipulation (09/14/2023)
- Multi-Object Graph Affordance Network: Enabling Goal-Oriented Planning through Compound Object Affordances (09/19/2023)
- Augment-Connect-Explore: a Paradigm for Visual Action Planning with Data Scarcity (03/24/2022)
