TossingBot: Learning to Throw Arbitrary Objects with Residual Physics

03/27/2019
by Andy Zeng, et al.

We investigate whether a robot arm can learn to pick and throw arbitrary objects into selected boxes quickly and accurately. Throwing has the potential to increase the physical reachability and picking speed of a robot arm. However, precisely throwing arbitrary objects in unstructured settings presents many challenges: from acquiring reliable pre-throw conditions (e.g. initial pose of object in manipulator) to handling varying object-centric properties (e.g. mass distribution, friction, shape) and dynamics (e.g. aerodynamics). In this work, we propose an end-to-end formulation that jointly learns to infer control parameters for grasping and throwing motion primitives from visual observations (images of arbitrary objects in a bin) through trial and error. Within this formulation, we investigate the synergies between grasping and throwing (i.e., learning grasps that enable more accurate throws) and between simulation and deep learning (i.e., using deep networks to predict residuals on top of control parameters predicted by a physics simulator). The resulting system, TossingBot, is able to grasp and throw arbitrary objects into boxes located outside its maximum reach range at 500+ mean picks per hour (600+ grasps per hour with 85% throwing accuracy), and generalizes to new objects and target locations. Videos are available at https://tossingbot.cs.princeton.edu
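The residual-physics idea described above can be illustrated with a small sketch: an analytical projectile model supplies a baseline release velocity for a given target distance, and a learned component adds a per-object correction. This is not the paper's implementation; the function names, the fixed 45-degree release angle, and the linear stand-in for the residual network are illustrative assumptions.

```python
import math

G = 9.8  # gravitational acceleration (m/s^2)

def ballistic_release_speed(distance, angle=math.pi / 4):
    """Baseline release speed from ideal projectile motion, assuming the
    release and landing heights are equal and neglecting aerodynamic drag.
    From the range equation d = v^2 * sin(2*theta) / g."""
    return math.sqrt(distance * G / math.sin(2 * angle))

def learned_residual(features, weights, bias):
    """Stand-in for the residual network: a linear model over object
    features (e.g. a visual embedding). In the actual system the residual
    is predicted by a deep network trained from trial-and-error throws."""
    return sum(w * f for w, f in zip(weights, features)) + bias

def throw_release_speed(distance, features, weights, bias):
    """Physics baseline plus learned residual correction."""
    v_physics = ballistic_release_speed(distance)
    delta_v = learned_residual(features, weights, bias)
    return v_physics + delta_v
```

With zero residual weights the output reduces to the pure physics estimate; training the residual lets the system compensate for object-specific effects (mass distribution, drag) that the analytical model ignores.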


research
09/04/2019

Directional Semantic Grasping of Real-World Objects: From Simulation to Reality

We present a deep reinforcement learning approach to grasp semantically ...
research
03/18/2021

Dynamic Grasping with Reachability and Motion Awareness

Grasping in dynamic environments presents a unique set of challenges. A ...
research
10/13/2016

Predicting the dynamics of 2d objects with a deep residual network

We investigate how a residual network can learn to predict the dynamics ...
research
10/14/2019

Learning to Generate 6-DoF Grasp Poses with Reachability Awareness

Motivated by the stringent requirements of unstructured real-world where...
research
07/31/2022

DA^2 Dataset: Toward Dexterity-Aware Dual-Arm Grasping

In this paper, we introduce DA^2, the first large-scale dual-arm dexteri...
research
11/02/2021

Simulation of Parallel-Jaw Grasping using Incremental Potential Contact Models

Soft compliant jaw tips are almost universally used with parallel-jaw ro...
research
03/10/2020

Learning a generative model for robot control using visual feedback

We introduce a novel formulation for incorporating visual feedback in co...
