MetaGraspNet: A Large-Scale Benchmark Dataset for Scene-Aware Ambidextrous Bin Picking via Physics-based Metaverse Synthesis

by Maximilian Gilles, et al.

Autonomous bin picking poses significant challenges to vision-driven robotic systems due to the complexity of the task, which spans multiple sensor modalities, highly entangled object layouts, and diverse item properties and gripper types. Existing methods often address the problem from a single perspective, yet diverse items and complex bin scenes demand a variety of picking strategies combined with advanced reasoning. Building robust and effective machine-learning algorithms for such a complex task therefore requires large amounts of comprehensive, high-quality data. Collecting this data in the real world would be prohibitively expensive and time-consuming, and hence intractable from a scalability perspective. To tackle this big, diverse data problem, we take inspiration from the recent rise of the metaverse concept and introduce MetaGraspNet, a large-scale photo-realistic bin picking dataset constructed via physics-based metaverse synthesis. The proposed dataset contains 217k RGBD images across 82 different article types, with full annotations for object detection, amodal perception, keypoint detection, manipulation order, and ambidextrous grasp labels for parallel-jaw and vacuum grippers. We also provide a real dataset of over 2.3k fully annotated high-quality RGBD images, divided into five difficulty levels and an unseen-object set to evaluate different object and layout properties. Finally, we conduct extensive experiments showing that our proposed vacuum seal model and synthetic dataset achieve state-of-the-art performance and generalize to real-world use cases.



