DOLPHINS: Dataset for Collaborative Perception enabled Harmonious and Interconnected Self-driving

07/15/2022
by Ruiqing Mao, et al.

Vehicle-to-Everything (V2X) networks have enabled collaborative perception in autonomous driving, a promising solution to the fundamental defects of stand-alone intelligence such as blind zones and limited long-range perception. However, the lack of suitable datasets has severely hindered the development of collaborative perception algorithms. In this work, we release DOLPHINS: Dataset for cOllaborative Perception enabled Harmonious and INterconnected Self-driving, a new simulated, large-scale, multi-scenario, multi-view, and multi-modality autonomous driving dataset that provides a ground-breaking benchmark platform for interconnected autonomous driving.

DOLPHINS surpasses current datasets in six dimensions: temporally-aligned images and point clouds from both vehicles and Road Side Units (RSUs), enabling both Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) collaborative perception; six typical scenarios with dynamic weather conditions, making it the most diverse interconnected autonomous driving dataset; meticulously selected viewpoints that provide full coverage of the key areas and every object; 42,376 frames and 292,549 objects, together with the corresponding 3D annotations, geo-positions, and calibrations, composing the largest dataset for collaborative perception; Full-HD images and 64-line LiDARs, yielding high-resolution data with sufficient detail; and well-organized APIs and open-source code that ensure the extensibility of DOLPHINS.

We also construct a benchmark of 2D detection, 3D detection, and multi-view collaborative perception tasks on DOLPHINS. The experimental results show that raw-level fusion through V2X communication can improve detection precision and, where RSUs are present, reduce the need for expensive LiDAR equipment on vehicles, which may accelerate the adoption of interconnected self-driving vehicles. DOLPHINS is now available at https://dolphins-dataset.net/.
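The raw-level fusion scheme mentioned above merges sensor data from connected agents before any detector runs. As a minimal sketch of that idea (not the dataset's actual API; the function names and the assumption that geo-positions are given as 4x4 agent-to-world transforms are ours), point clouds from an RSU and a neighboring vehicle can be re-expressed in the ego frame and concatenated into a single cloud:

```python
import numpy as np

def transform_points(points: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous transform T to an (N, 3) point cloud."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 4)
    return (homo @ T.T)[:, :3]

def raw_level_fusion(clouds, poses, ego_pose):
    """Early (raw-level) fusion: map every agent's LiDAR points into the ego frame.

    clouds   -- list of (N_i, 3) point clouds, one per agent (ego, V2V, V2I)
    poses    -- list of 4x4 agent-to-world transforms (hypothetically derived
                from the per-frame geo-positions and calibrations)
    ego_pose -- 4x4 ego-to-world transform
    """
    world_to_ego = np.linalg.inv(ego_pose)
    fused = [transform_points(pts, world_to_ego @ T)   # agent -> world -> ego
             for pts, T in zip(clouds, poses)]
    return np.vstack(fused)  # one merged cloud for a single 3D detector
```

Because the merged cloud is indistinguishable from a denser single-sensor scan, an off-the-shelf 3D detector can consume it unchanged; this is also why, when an RSU contributes points, the ego vehicle can plausibly carry a cheaper, lower-resolution LiDAR.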


