4D-Net for Learned Multi-Modal Alignment

09/02/2021
by AJ Piergiovanni et al.

We present 4D-Net, a 3D object detection approach, which utilizes 3D Point Cloud and RGB sensing information, both in time. We are able to incorporate the 4D information by performing a novel dynamic connection learning across various feature representations and levels of abstraction, as well as by observing geometric constraints. Our approach outperforms the state-of-the-art and strong baselines on the Waymo Open Dataset. 4D-Net is better able to use motion cues and dense image information to detect distant objects more successfully.
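The abstract describes learning dynamic connections between feature representations from the two modalities. The paper does not spell out the mechanism here, but one common way to realize learned cross-modal connections is to weight each candidate feature level by a learnable scalar (normalized with a softmax) before fusing with the point-cloud branch. The sketch below is a minimal, hypothetical illustration of that idea; the function names, shapes, and the softmax-gating choice are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_features(point_feat, image_feats, connection_logits):
    """Fuse a point-cloud feature with multiple image feature levels.

    point_feat:        (D_p,) point-cloud feature vector
    image_feats:       list of (D_i,) image feature vectors, one per
                       abstraction level (all the same size here)
    connection_logits: (num_levels,) scalars that would be learned;
                       they gate how much each level contributes
    """
    w = softmax(connection_logits)  # normalized connection weights
    # Weighted sum collapses the image levels into one vector.
    fused_img = sum(wi * f for wi, f in zip(w, image_feats))
    # Concatenate with the point-cloud feature for downstream detection.
    return np.concatenate([point_feat, fused_img])
```

With zero logits every image level contributes equally; during training, gradients through the logits would shift weight toward the most useful level of abstraction for each connection.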


Related research

- 10/18/2022: Homogeneous Multi-modal Feature Fusion and Interaction for 3D Object Detection. "Multi-modal 3D object detection has been an active research topic in aut..."
- 03/07/2023: LoGoNet: Towards Accurate 3D Object Detection with Local-to-Global Cross-Modal Fusion. "LiDAR-camera fusion methods have shown impressive performance in 3D obje..."
- 04/25/2021: Temp-Frustum Net: 3D Object Detection with Temporal Fusion. "3D object detection is a core component of automated driving systems. St..."
- 04/28/2021: Learning Synergistic Attention for Light Field Salient Object Detection. "We propose a novel Synergistic Attention Network (SA-Net) to address the..."
- 12/21/2021: EPNet++: Cascade Bi-directional Fusion for Multi-Modal 3D Object Detection. "Recently, fusing the LiDAR point cloud and camera image to improve the p..."
- 01/26/2019: Points2Pix: 3D Point-Cloud to Image Translation using conditional Generative Adversarial Networks. "We present the first approach for 3D point-cloud to image translation ba..."
- 09/08/2021: GTT-Net: Learned Generalized Trajectory Triangulation. "We present GTT-Net, a supervised learning framework for the reconstructi..."
