V2VNet: Vehicle-to-Vehicle Communication for Joint Perception and Prediction

by Tsun-Hsuan Wang et al.

In this paper, we explore the use of vehicle-to-vehicle (V2V) communication to improve the perception and motion forecasting performance of self-driving vehicles. By intelligently aggregating the information received from multiple nearby vehicles, we can observe the same scene from different viewpoints. This allows us to see through occlusions and detect actors at long range, where the observations are very sparse or non-existent. We also show that our approach of sending compressed deep feature map activations achieves high accuracy while satisfying communication bandwidth requirements.
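To make the idea concrete, here is a minimal sketch of the transmit-and-fuse pipeline the abstract describes: each vehicle compresses its bird's-eye-view feature map, neighbors' maps are decompressed, spatially aligned to the ego frame, and aggregated. All function names are illustrative; uniform quantization stands in for V2VNet's learned compression, an integer cell shift stands in for full pose-based warping, and mean pooling stands in for the paper's learned graph-neural-network aggregation.

```python
import numpy as np

def compress(feat, bits=8):
    """Uniform quantization as a stand-in for learned feature compression."""
    lo, hi = float(feat.min()), float(feat.max())
    q = np.round((feat - lo) / (hi - lo + 1e-8) * (2**bits - 1)).astype(np.uint8)
    return q, lo, hi

def decompress(q, lo, hi, bits=8):
    """Invert the quantization on the receiving (ego) vehicle."""
    return q.astype(np.float32) / (2**bits - 1) * (hi - lo) + lo

def shift_to_ego(feat, dx, dy):
    """Align a neighbor's BEV map to the ego frame by an integer cell shift.

    A real system would warp with the full relative SE(2) pose between vehicles.
    """
    return np.roll(np.roll(feat, dy, axis=0), dx, axis=1)

def fuse(ego_feat, neighbor_feats, offsets):
    """Mean-aggregate aligned maps; V2VNet itself learns this step with a GNN."""
    aligned = [ego_feat] + [
        shift_to_ego(f, dx, dy) for f, (dx, dy) in zip(neighbor_feats, offsets)
    ]
    return np.mean(aligned, axis=0)
```

A fused map built this way covers regions the ego vehicle cannot see directly, which is the mechanism behind seeing through occlusions and detecting sparsely observed, long-range actors.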


Related papers:
Learning for Vehicle-to-Vehicle Cooperative Perception under Lossy Communication

Deep learning has been widely used in the perception (e.g., 3D object de...

Map Container: A Map-based Framework for Cooperative Perception

The idea of cooperative perception is to benefit from shared perception ...

Keypoints-Based Deep Feature Fusion for Cooperative Vehicle Detection of Autonomous Driving

Sharing collective perception messages (CPM) between vehicles is investi...

OPV2V: An Open Benchmark Dataset and Fusion Pipeline for Perception with Vehicle-to-Vehicle Communication

Employing Vehicle-to-Vehicle communication to enhance perception perform...

Argoverse: 3D Tracking and Forecasting with Rich Maps

We present Argoverse – two datasets designed to support autonomous vehic...

Estimating Uncertainty of Autonomous Vehicle Systems with Generalized Polynomial Chaos

Modern autonomous vehicle systems use complex perception and control com...
