MetaGraspNet: A Large-Scale Benchmark Dataset for Vision-driven Robotic Grasping via Physics-based Metaverse Synthesis

12/29/2021
by Yuhao Chen, et al.

There has been increasing interest in smart factories powered by robotics systems to tackle repetitive, laborious tasks. One impactful yet challenging task in robotics-powered smart factory applications is robotic grasping: using robotic arms to grasp objects autonomously in different settings. Robotic grasping requires a variety of computer vision tasks such as object detection, segmentation, grasp prediction, and pick planning. While significant progress has been made in leveraging machine learning for robotic grasping, particularly with deep learning, a major challenge remains: the need for large-scale, high-quality RGBD datasets that cover a wide diversity of scenarios and permutations. To tackle this big, diverse data problem, we take inspiration from the recent rise of the metaverse, which has greatly narrowed the gap between virtual worlds and the physical world. Metaverses allow us to create digital twins of real-world manufacturing scenarios and to virtually create different scenarios from which large volumes of data can be generated for training models. In this paper, we present MetaGraspNet: a large-scale benchmark dataset for vision-driven robotic grasping via physics-based metaverse synthesis. The proposed dataset contains 100,000 images and 25 different object types, and is split into 5 difficulty levels to evaluate object detection and segmentation model performance across different grasping scenarios. Alongside the dataset, we also propose a new layout-weighted performance metric for evaluating object detection and segmentation performance in a manner more appropriate for robotic grasp applications than existing general-purpose performance metrics. Our benchmark dataset is available open source on Kaggle, with the first phase consisting of detailed object detection, segmentation, and layout annotations, together with a layout-weighted performance metric script.
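The abstract does not spell out the layout-weighted metric; its exact formulation is defined in the metric script released with the dataset. As an illustration of the general idea only, the following minimal Python sketch assumes each ground-truth object is annotated with a layout layer (layer 1 = top of the pile, directly graspable) and weights top-layer objects more heavily than buried ones, so that missing a graspable object costs more than missing an occluded one. The "layer" field, the decay parameter, and the layout_weighted_score helper are hypothetical names for illustration, not the paper's actual API.

from typing import List, Dict

def layout_weight(layer: int, decay: float = 0.5) -> float:
    """Weight for an object at a given layout layer (layer 1 = graspable top)."""
    return decay ** (layer - 1)

def layout_weighted_score(matches: List[Dict]) -> float:
    """
    matches: one entry per ground-truth object, e.g. {"layer": 1, "detected": True}.
    Returns a layout-weighted detection recall in [0, 1], where top-layer
    objects contribute more to the score than occluded, lower-layer ones.
    """
    total = sum(layout_weight(m["layer"]) for m in matches)
    hit = sum(layout_weight(m["layer"]) for m in matches if m["detected"])
    return hit / total if total > 0 else 0.0

if __name__ == "__main__":
    # Two top-layer objects (one missed) and one buried object (detected):
    scene = [
        {"layer": 1, "detected": True},
        {"layer": 1, "detected": False},
        {"layer": 3, "detected": True},
    ]
    print(f"layout-weighted recall: {layout_weighted_score(scene):.3f}")  # 0.556

A full implementation would fold this per-object weighting into precision/recall or mAP computation at each IoU threshold; the sketch isolates only the weighting idea, under the stated assumptions.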

