Learning to See before Learning to Act: Visual Pre-training for Manipulation

07/01/2021
by Lin Yen-Chen, et al.

Does having visual priors (e.g., the ability to detect objects) facilitate learning to perform vision-based manipulation (e.g., picking up objects)? We study this problem under the framework of transfer learning, where the model is first trained on a passive vision task and then adapted to perform an active manipulation task. We find that pre-training on vision tasks significantly improves generalization and sample efficiency for learning to manipulate objects. However, realizing these gains requires careful selection of which parts of the model to transfer. Our key insight is that the outputs of standard vision models are highly correlated with the affordance maps commonly used in manipulation. Therefore, we explore directly transferring model parameters from vision networks to affordance prediction networks, and show that this can result in successful zero-shot adaptation, where a robot can pick up certain objects with zero robotic experience. With a small amount of robotic experience, we can further fine-tune the affordance model to achieve better results: with just 10 minutes of suction experience or 1 hour of grasping experience, our method achieves roughly 80% success.
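The parameter-transfer idea described above can be sketched in a few lines of PyTorch. The tiny architectures below are hypothetical stand-ins, not the paper's actual networks: the point is only to show how convolutional weights from a passively trained vision model can be copied into an affordance-prediction network whose encoder shares the same layer names and shapes, leaving the manipulation-specific head to be learned (or fine-tuned) from robot experience.

```python
# Minimal sketch of transferring vision-model parameters into an
# affordance network. Architectures are illustrative, not the paper's.
import torch
import torch.nn as nn


class VisionClassifier(nn.Module):
    """A passively trained vision model (e.g., object classification)."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        h = self.encoder(x).mean(dim=(2, 3))  # global average pool
        return self.head(h)


class AffordanceNet(nn.Module):
    """Predicts a per-pixel affordance map (e.g., where to grasp)."""

    def __init__(self):
        super().__init__()
        # Same layer names and shapes as the classifier's encoder,
        # so its parameters can be copied over directly.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, 1, 1)  # per-pixel affordance logit

    def forward(self, x):
        return self.head(self.encoder(x))


pretrained = VisionClassifier()
affordance = AffordanceNet()

# Copy every parameter whose name and shape match (here, the encoder);
# the mismatched classification head is skipped automatically.
src = pretrained.state_dict()
dst = affordance.state_dict()
transferred = {k: v for k, v in src.items()
               if k in dst and v.shape == dst[k].shape}
dst.update(transferred)
affordance.load_state_dict(dst)

amap = affordance(torch.randn(1, 3, 64, 64))  # shape (1, 1, 64, 64)
```

After this transfer, the affordance network can be used zero-shot or fine-tuned on a small amount of robotic experience; only the head starts from random initialization.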


Related Research

- Efficient Adaptation for End-to-End Vision-Based Robotic Manipulation (04/21/2020)
  One of the great promises of robot learning systems is that they will be...

- QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation (06/27/2018)
  In this paper, we study the problem of learning vision-based dynamic man...

- KOVIS: Keypoint-based Visual Servoing with Zero-Shot Sim-to-Real Transfer for Robotics Manipulation (07/28/2020)
  We present KOVIS, a novel learning-based, calibration-free visual servoi...

- RoboCat: A Self-Improving Foundation Agent for Robotic Manipulation (06/20/2023)
  The ability to leverage heterogeneous robotic experience from different ...

- Active Perception and Representation for Robotic Manipulation (03/15/2020)
  The vast majority of visual animals actively control their eyes, heads, ...

- CLIP2Point: Transfer CLIP to Point Cloud Classification with Image-Depth Pre-training (10/03/2022)
  Pre-training across 3D vision and language remains under development bec...

- Vision-Based Manipulators Need to Also See from Their Hands (03/15/2022)
  We study how the choice of visual perspective affects learning and gener...
