Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching

10/03/2017
by Andy Zeng et al.

This paper presents a robotic pick-and-place system that is capable of grasping and recognizing both known and novel objects in cluttered environments. The key new feature of the system is that it handles a wide range of object categories without needing any task-specific training data for novel objects. To achieve this, it first uses a category-agnostic affordance prediction algorithm to select among four different grasping primitive behaviors. It then recognizes picked objects with a cross-domain image classification framework that matches observed images to product images. Since product images are readily available for a wide range of objects (e.g., from the web), the system works out-of-the-box for novel objects without requiring any additional training data. Exhaustive experimental results demonstrate that our multi-affordance grasping achieves high success rates for a wide variety of objects in clutter, and our recognition algorithm achieves high accuracy for both known and novel grasped objects. The approach was part of the MIT-Princeton Team system that took 1st place in the stowing task at the 2017 Amazon Robotics Challenge. All code, datasets, and pre-trained models are available online at http://arc.cs.princeton.edu
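
As a rough illustration of the pipeline described above, the sketch below wires the two stages together: dense per-primitive affordance prediction followed by selection of the best primitive and picking location, then recognition of the picked object by nearest-neighbor matching against product images in a learned feature space. The primitive names follow the paper; `predict_affordances`, `embed`, the 128-dimensional feature size, and the use of cosine similarity are illustrative placeholders, not the authors' released models.

```python
# Minimal sketch of the two-stage pick-and-place pipeline, assuming
# placeholder networks. Primitive names follow the paper; everything
# else (feature size, similarity metric) is a hypothetical stand-in.
import numpy as np

PRIMITIVES = ["suction-down", "suction-side", "grasp-down", "flush-grasp"]

def predict_affordances(rgbd_heightmap):
    """Stand-in for the category-agnostic affordance networks: returns
    one dense affordance map (values in [0, 1]) per grasping primitive."""
    h, w, _ = rgbd_heightmap.shape
    return {p: np.random.rand(h, w) for p in PRIMITIVES}  # placeholder

def select_grasp(affordance_maps):
    """Pick the primitive and pixel location with the highest predicted
    affordance across all four maps."""
    return max(
        ((p, np.unravel_index(m.argmax(), m.shape), float(m.max()))
         for p, m in affordance_maps.items()),
        key=lambda t: t[2],
    )  # (primitive, (row, col), score)

def embed(image):
    """Stand-in for the cross-domain feature embedding that maps observed
    images and product images of the same object close together."""
    return np.random.rand(128)  # placeholder feature vector

def recognize(observed_image, product_images):
    """Match the picked object to the closest product image in the shared
    embedding space (cosine similarity used here as an assumption)."""
    q = embed(observed_image)
    q = q / np.linalg.norm(q)
    best_id, best_sim = None, -1.0
    for obj_id, img in product_images.items():
        v = embed(img)
        sim = float(q @ (v / np.linalg.norm(v)))
        if sim > best_sim:
            best_id, best_sim = obj_id, sim
    return best_id, best_sim
```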


Related research

01/28/2023 - Towards Precise Model-free Robotic Grasping with Sim-to-Real Transfer Learning
Precise robotic grasping of several novel objects is a huge challenge in...

03/27/2018 - Learning Synergies between Pushing and Grasping with Self-supervised Deep Reinforcement Learning
Skilled robotic manipulation benefits from complex synergies between non...

11/27/2021 - GATER: Learning Grasp-Action-Target Embeddings and Relations for Task-Specific Grasping
Intelligent service robots require the ability to perform a variety of t...

11/15/2021 - Semantically Grounded Object Matching for Robust Robotic Scene Rearrangement
Object rearrangement has recently emerged as a key competency in robot m...

12/04/2022 - Learning Bifunctional Push-grasping Synergistic Strategy for Goal-agnostic and Goal-oriented Tasks
Both goal-agnostic and goal-oriented tasks have practical value for robo...

07/24/2023 - simPLE: a visuotactile method learned in simulation to precisely pick, localize, regrasp, and place objects
Existing robotic systems have a clear tension between generality and pre...

03/12/2020 - A Multi-task Learning Framework for Grasping-Position Detection and Few-Shot Classification
It is a big problem that a model of deep learning for a picking robot ne...
