BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation

05/26/2022
by Zhijian Liu, et al.

Multi-sensor fusion is essential for an accurate and reliable autonomous driving system. Recent approaches are based on point-level fusion: augmenting the LiDAR point cloud with camera features. However, the camera-to-LiDAR projection throws away the semantic density of camera features, hindering the effectiveness of such methods, especially for semantic-oriented tasks (such as 3D scene segmentation). In this paper, we break this deeply-rooted convention with BEVFusion, an efficient and generic multi-task multi-sensor fusion framework. It unifies multi-modal features in the shared bird's-eye view (BEV) representation space, which nicely preserves both geometric and semantic information. To achieve this, we diagnose and lift key efficiency bottlenecks in the view transformation with optimized BEV pooling, reducing latency by more than 40x. BEVFusion is fundamentally task-agnostic and seamlessly supports different 3D perception tasks with almost no architectural changes. It establishes the new state of the art on nuScenes, achieving 1.3% higher mAP and NDS on 3D object detection and 13.6% higher mIoU on BEV map segmentation, with 1.9x lower computation cost.
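The shared BEV representation described above amounts to splatting per-point (or per-camera-frustum) features onto a flat grid over the ground plane, so LiDAR and camera features can be fused cell by cell. The sketch below is a minimal NumPy illustration of that pooling step, assuming a 100 m x 100 m grid at 0.5 m resolution; the function name bev_pool and all parameters are illustrative assumptions, not the paper's optimized GPU kernel that delivers the reported 40x speedup.

```python
# Minimal, illustrative sketch of BEV pooling: points carrying C-dim features
# are scatter-added into a bird's-eye-view grid. Grid bounds, resolution, and
# names are assumptions for illustration, not the paper's implementation.
import numpy as np

def bev_pool(coords, feats, x_range=(-50.0, 50.0), y_range=(-50.0, 50.0),
             resolution=0.5, channels=64):
    """Sum-pool per-point features into a (H, W, C) BEV grid.

    coords: (N, 3) x, y, z positions in the ego frame.
    feats:  (N, C) per-point features.
    """
    h = int((y_range[1] - y_range[0]) / resolution)
    w = int((x_range[1] - x_range[0]) / resolution)
    bev = np.zeros((h, w, channels), dtype=feats.dtype)

    # Convert metric coordinates to integer grid indices.
    xs = ((coords[:, 0] - x_range[0]) / resolution).astype(np.int64)
    ys = ((coords[:, 1] - y_range[0]) / resolution).astype(np.int64)

    # Keep only points that fall inside the grid.
    keep = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    xs, ys, feats = xs[keep], ys[keep], feats[keep]

    # Unbuffered scatter-add: all points mapping to the same cell accumulate.
    np.add.at(bev, (ys, xs), feats)
    return bev

# Example usage with random points standing in for frustum features.
rng = np.random.default_rng(0)
pts = rng.uniform(-55, 55, size=(10000, 3))
ft = rng.standard_normal((10000, 64)).astype(np.float32)
print(bev_pool(pts, ft).shape)  # (200, 200, 64)
```

Once camera and LiDAR features live in the same grid, fusion reduces to concatenating or convolving the two BEV maps, which is why the representation supports detection and segmentation heads with almost no architectural changes.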

research
12/09/2022

SemanticBEVFusion: Rethink LiDAR-Camera Fusion in Unified Bird's-Eye View Representation for 3D Object Detection

LiDAR and camera are two essential sensors for 3D object detection in au...
research
08/15/2023

UniTR: A Unified and Efficient Multi-Modal Transformer for Bird's-Eye-View Representation

Jointly processing information from multiple sensors is crucial to achie...
research
09/16/2023

Multi-camera Bird's Eye View Perception for Autonomous Driving

Most automated driving systems comprise a diverse sensor set, including ...
research
03/21/2019

Short-Term Prediction and Multi-Camera Fusion on Semantic Grids

An environment representation (ER) is a substantial part of every autono...
research
03/30/2023

Understanding the Robustness of 3D Object Detection with Bird's-Eye-View Representations in Autonomous Driving

3D object detection is an essential perception task in autonomous drivin...
research
07/18/2022

UniFormer: Unified Multi-view Fusion Transformer for Spatial-Temporal Representation in Bird's-Eye-View

Bird's eye view (BEV) representation is a new perception formulation for...
research
09/07/2022

MSMDFusion: Fusing LiDAR and Camera at Multiple Scales with Multi-Depth Seeds for 3D Object Detection

Fusing LiDAR and camera information is essential for achieving accurate ...
