EPNet: Enhancing Point Features with Image Semantics for 3D Object Detection

by Tengteng Huang et al.

In this paper, we aim to address two critical issues in the 3D detection task: the exploitation of multiple sensors (namely the LiDAR point cloud and the camera image), and the inconsistency between localization and classification confidence. To this end, we propose a novel fusion module that enhances point features with semantic image features in a point-wise manner, without requiring any image annotations. In addition, a consistency enforcing loss is employed to explicitly encourage agreement between localization and classification confidence. We design an end-to-end learnable framework named EPNet to integrate these two components. Extensive experiments on the KITTI and SUN-RGBD datasets demonstrate the superiority of EPNet over state-of-the-art methods. Code and models are available at: <https://github.com/happinesslz/EPNet>.
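The two components in the abstract can be sketched in a few lines. The snippet below is an illustrative NumPy toy, not the paper's implementation: `fuse_point_image_features` mimics the point-wise fusion idea (each point feature is enhanced by the image feature sampled at its projected pixel, scaled by a learned gate), and `consistency_enforcing_loss` is a simple surrogate for the consistency idea, using the box's IoU with ground truth as a soft target for the classification confidence. All function names, shapes, and the exact loss form are assumptions for illustration.

```python
import numpy as np

def fuse_point_image_features(point_feats, image_feats, gate):
    # Point-wise fusion sketch (assumption, not the paper's exact module):
    # each of the N point features (N, C) is enhanced by the image feature
    # sampled at that point's projected pixel (N, C), scaled by a learned
    # gating weight in [0, 1] of shape (N, 1).
    return point_feats + gate * image_feats

def consistency_enforcing_loss(cls_conf, iou):
    # Consistency sketch (assumption): encourage classification confidence
    # to agree with localization quality by treating the predicted box's
    # IoU with ground truth as a soft cross-entropy target.
    cls_conf = np.clip(cls_conf, 1e-7, 1.0 - 1e-7)
    return float(np.mean(-(iou * np.log(cls_conf)
                           + (1.0 - iou) * np.log(1.0 - cls_conf))))

# Toy example: 4 points with 3-dim features; gates of 0, 0.5, 1, 1.
pts = np.ones((4, 3))
img = np.full((4, 3), 2.0)
gate = np.array([[0.0], [0.5], [1.0], [1.0]])
fused = fuse_point_image_features(pts, img, gate)  # rows: 1.0, 2.0, 3.0, 3.0
```

The loss is minimized when the confidence matches the IoU, so predictions that localize well are pushed toward high scores and poorly localized ones toward low scores, which is the qualitative behavior the paper's consistency loss targets.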


EPNet++: Cascade Bi-directional Fusion for Multi-Modal 3D Object Detection

Recently, fusing the LiDAR point cloud and camera image to improve the p...

Deep Continuous Fusion for Multi-Sensor 3D Object Detection

In this paper, we propose a novel 3D object detector that can exploit bo...

Multi-View Adaptive Fusion Network for 3D Object Detection

3D object detection based on LiDAR-camera fusion is becoming an emerging...

CIA-SSD: Confident IoU-Aware Single-Stage Object Detector From Point Cloud

Existing single-stage detectors for locating objects in point clouds oft...

Voxel Field Fusion for 3D Object Detection

In this work, we present a conceptually simple yet effective framework f...

Paint and Distill: Boosting 3D Object Detection with Semantic Passing Network

3D object detection task from lidar or camera sensors is essential for a...

Semantic-Aligned Matching for Enhanced DETR Convergence and Multi-Scale Feature Fusion

The recently proposed DEtection TRansformer (DETR) has established a ful...