Deep Continuous Fusion for Multi-Sensor 3D Object Detection

by   Ming Liang, et al.

In this paper, we propose a novel 3D object detector that can exploit both LIDAR and cameras to perform very accurate localization. Towards this goal, we design an end-to-end learnable architecture that exploits continuous convolutions to fuse image and LIDAR feature maps at different levels of resolution. Our proposed continuous fusion layer encodes both discrete-state image features and continuous geometric information. This enables us to design a novel, reliable and efficient end-to-end learnable 3D object detector based on multiple sensors. Our experimental evaluation on both KITTI and a large-scale 3D object detection benchmark shows significant improvements over the state of the art.
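The core idea of the continuous fusion layer is: for each target location in the LIDAR (bird's-eye-view) feature map, find the k nearest LIDAR points, project them into the camera to retrieve image features, concatenate the continuous 3D offsets, and fuse the result with a small MLP. The sketch below illustrates that dataflow in plain NumPy; the function names, the single-layer MLP with random weights (standing in for learned ones), and the nearest-neighbor gather are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)


def project_to_image(points, K):
    """Pinhole projection of 3D camera-frame points (N, 3) to pixels (N, 2)."""
    uvw = points @ K.T
    return uvw[:, :2] / uvw[:, 2:3]


def continuous_fusion(bev_points, lidar_points, image_feat, K, k=3):
    """Toy continuous fusion: for each BEV target point, gather image
    features from its k nearest LIDAR points (projected into the camera)
    together with their 3D offsets, and fuse them with a one-layer MLP.
    The random MLP weights are a stand-in for learned parameters."""
    H, W, C = image_feat.shape
    uv = project_to_image(lidar_points, K)
    # Snap projected coordinates onto the image feature map.
    u = np.clip(uv[:, 0].astype(int), 0, W - 1)
    v = np.clip(uv[:, 1].astype(int), 0, H - 1)
    W1 = rng.normal(0.0, 0.1, size=(C + 3, C))  # (image feat + offset) -> C
    fused = np.zeros((len(bev_points), C))
    for i, p in enumerate(bev_points):
        d = np.linalg.norm(lidar_points - p, axis=1)
        nn = np.argsort(d)[:k]                        # k nearest LIDAR points
        feats = image_feat[v[nn], u[nn]]              # (k, C) image features
        offs = lidar_points[nn] - p                   # (k, 3) continuous geometry
        h = np.concatenate([feats, offs], axis=1)     # (k, C + 3)
        fused[i] = np.maximum(h @ W1, 0.0).sum(0)     # ReLU MLP, sum over neighbors
    return fused


# Minimal usage with synthetic data: points in front of a toy 64x64 camera.
K = np.array([[100.0, 0.0, 32.0], [0.0, 100.0, 32.0], [0.0, 0.0, 1.0]])
lidar = np.c_[rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200), rng.uniform(2, 10, 200)]
image_feat = rng.normal(size=(64, 64, 8))
out = continuous_fusion(lidar[:10], lidar, image_feat, K, k=3)
print(out.shape)  # one fused C-dim feature per BEV target point
```

In the actual architecture this layer is applied at multiple feature-map resolutions, so image evidence is injected into the LIDAR stream at several depths of the network rather than only once.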




Multi-Task Multi-Sensor Fusion for 3D Object Detection

In this paper we propose to exploit multiple related tasks for accurate ...

End-to-end Learning of Multi-sensor 3D Tracking by Detection

In this paper we propose a novel approach to tracking by detection that ...

EPNet: Enhancing Point Features with Image Semantics for 3D Object Detection

In this paper, we aim at addressing two critical issues in the 3D detect...

HDNET: Exploiting HD Maps for 3D Object Detection

In this paper we show that High-Definition (HD) maps provide strong prio...

Range Conditioned Dilated Convolutions for Scale Invariant 3D Object Detection

This paper presents a novel 3D object detection framework that processes...

GeoGraph: Learning graph-based multi-view object detection with geometric cues end-to-end

In this paper we propose an end-to-end learnable approach that detects s...

PI-RCNN: An Efficient Multi-sensor 3D Object Detector with Point-based Attentive Cont-conv Fusion Module

LIDAR point clouds and RGB-images are both extremely essential for 3D ob...
