V2V4Real: A Real-world Large-scale Dataset for Vehicle-to-Vehicle Cooperative Perception

03/14/2023
by Runsheng Xu, et al.

Modern perception systems on autonomous vehicles are known to be sensitive to occlusions and to lack long-range perception capability, which remains one of the key bottlenecks preventing Level 5 autonomy. Recent research has demonstrated that Vehicle-to-Vehicle (V2V) cooperative perception has great potential to revolutionize the autonomous driving industry; however, the lack of a real-world dataset has hindered progress in this field. To facilitate the development of cooperative perception, we present V2V4Real, the first large-scale real-world multi-modal dataset for V2V perception. The data is collected by two vehicles equipped with multi-modal sensors driving together through diverse scenarios. V2V4Real covers a driving area of 410 km and comprises 20K LiDAR frames, 40K RGB frames, 240K annotated 3D bounding boxes across 5 classes, and HD maps covering all of the driving routes. V2V4Real introduces three perception tasks: cooperative 3D object detection, cooperative 3D object tracking, and Sim2Real domain adaptation for cooperative perception. We provide comprehensive benchmarks of recent cooperative perception algorithms on all three tasks. The V2V4Real dataset and codebase can be found at https://github.com/ucla-mobility/V2V4Real.
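
The core operation behind V2V cooperative perception is aggregating the two vehicles' observations in a common coordinate frame. The following is a minimal illustrative sketch, not the actual V2V4Real codebase API, of how a cooperating vehicle's LiDAR points could be projected into the ego vehicle's frame given 6-DoF world poses; the pose convention and all function names here are assumptions made for illustration.

# Minimal sketch (hypothetical, not the V2V4Real API): project the cooperating
# vehicle's LiDAR points into the ego vehicle's frame via their world poses.
import numpy as np


def pose_to_matrix(x, y, z, roll, pitch, yaw):
    # Build a 4x4 homogeneous transform (vehicle -> world) from a 6-DoF pose.
    # Angles in radians; ZYX (yaw-pitch-roll) rotation order is assumed here.
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rot = np.array([
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ])
    mat = np.eye(4)
    mat[:3, :3] = rot
    mat[:3, 3] = [x, y, z]
    return mat


def project_points_to_ego(points_coop, pose_coop, pose_ego):
    # Transform Nx3 points from the cooperating vehicle's frame into the
    # ego frame: ego <- world <- coop.
    coop_to_world = pose_to_matrix(*pose_coop)
    ego_to_world = pose_to_matrix(*pose_ego)
    coop_to_ego = np.linalg.inv(ego_to_world) @ coop_to_world

    # Homogeneous coordinates: append a column of ones, transform, drop it.
    homo = np.hstack([points_coop, np.ones((points_coop.shape[0], 1))])
    return (homo @ coop_to_ego.T)[:, :3]


if __name__ == "__main__":
    # Toy example: the cooperating vehicle sits 20 m ahead of the ego vehicle
    # and is rotated 90 degrees; a point at its origin lands at (20, 0, 0).
    ego_pose = (0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
    coop_pose = (20.0, 0.0, 0.0, 0.0, 0.0, np.pi / 2)
    pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    print(project_points_to_ego(pts, coop_pose, ego_pose))

In an early-fusion pipeline, the transformed points would simply be concatenated with the ego point cloud before running a standard single-vehicle 3D detector; intermediate- and late-fusion variants instead share features or detections, but rely on the same pose-based alignment.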
