QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation

06/27/2018
by Dmitry Kalashnikov et al.

In this paper, we study the problem of learning vision-based dynamic manipulation skills using a scalable reinforcement learning approach. We study this problem in the context of grasping, a longstanding challenge in robotic manipulation. In contrast to static learning behaviors that choose a grasp point and then execute the desired grasp, our method enables closed-loop vision-based control, whereby the robot continuously updates its grasp strategy based on the most recent observations to optimize long-horizon grasp success. To that end, we introduce QT-Opt, a scalable self-supervised vision-based reinforcement learning framework that can leverage over 580k real-world grasp attempts to train a deep neural network Q-function with over 1.2M parameters to perform closed-loop, real-world grasping that achieves a 96% success rate on unseen objects. Aside from attaining a very high success rate, our method exhibits behaviors that are quite distinct from more standard grasping systems: using only RGB vision-based perception from an over-the-shoulder camera, our method automatically learns regrasping strategies, probes objects to find the most effective grasps, learns to reposition objects and perform other non-prehensile pre-grasp manipulations, and responds dynamically to disturbances and perturbations.
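Because the action space is continuous, the Q-function cannot be maximized by enumerating actions; QT-Opt instead selects actions with the cross-entropy method (CEM), a derivative-free optimizer that iteratively resamples around the best-scoring candidates. The sketch below illustrates that inner loop under stated assumptions: `toy_q` is a hypothetical stand-in for the learned Q-function, and the hyperparameters (2 iterations, 64 samples, 6 elites) follow the values reported for QT-Opt.

```python
import numpy as np

def cem_argmax_q(q_func, obs, action_dim, iters=2, samples=64, n_elite=6, seed=0):
    """Approximately maximize q_func(obs, a) over continuous actions via CEM.

    Each iteration samples candidate actions from a Gaussian, scores them
    with the Q-function, and refits the Gaussian to the top-scoring elites.
    """
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(action_dim), np.ones(action_dim)
    for _ in range(iters):
        actions = rng.normal(mean, std, size=(samples, action_dim))
        scores = np.array([q_func(obs, a) for a in actions])
        elites = actions[np.argsort(scores)[-n_elite:]]
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean  # the refined mean serves as the selected action

# Hypothetical stand-in for the learned Q-function: peaks when action == obs.
def toy_q(obs, action):
    return -np.sum((action - obs) ** 2)

obs = np.array([0.5, -0.3])
best_action = cem_argmax_q(toy_q, obs, action_dim=2)
```

At execution time this maximization runs inside the closed loop: after every camera observation the robot re-solves for the best action, which is what lets it react to perturbations rather than committing to a single precomputed grasp.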


Related research

04/06/2021  Attribute-Based Robotic Grasping with One-Grasp Adaptation
Robotic grasping is one of the most fundamental robotic manipulation tas...

07/31/2023  Deep Reinforcement Learning of Dexterous Pre-grasp Manipulation for Human-like Functional Categorical Grasping
Many objects such as tools and household items can be used only if grasp...

09/10/2019  MAT: Multi-Fingered Adaptive Tactile Grasping via Deep Reinforcement Learning
Vision-based grasping systems typically adopt an open-loop execution of ...

03/03/2021  Design of an Affordable Prosthetic Arm Equipped with Deep Learning Vision-Based Manipulation
Many amputees throughout the world are left with limited options to pers...

05/12/2023  Vision and Control for Grasping Clear Plastic Bags
We develop two novel vision methods for planning effective grasps for cl...

05/17/2019  REPLAB: A Reproducible Low-Cost Arm Benchmark Platform for Robotic Learning
Standardized evaluation measures have aided in the progress of machine l...

07/01/2021  Learning to See before Learning to Act: Visual Pre-training for Manipulation
Does having visual priors (e.g. the ability to detect objects) facilitat...
