Interactive Multi-scale Fusion of 2D and 3D Features for Multi-object Tracking

03/30/2022
by Guangming Wang, et al.

Multiple object tracking (MOT) is a significant task in achieving autonomous driving. Traditional works attempt to complete this task based either on point clouds (PC) collected by LiDAR or on images captured by cameras. However, relying on a single sensor is not robust enough, because that sensor might fail during tracking. On the other hand, fusing features from multiple modalities helps improve accuracy. As a result, new techniques that integrate features from multiple sensor modalities are being developed. Texture information from RGB cameras and 3D structure information from LiDAR have respective advantages under different circumstances. However, effective feature fusion is not easy to achieve because the information modalities are completely distinct. Previous fusion methods usually fuse only the top-level features after the backbones have extracted features from the different modalities. In this paper, we first introduce PointNet++ to obtain multi-scale deep representations of the point cloud, making it adaptive to our proposed Interactive Feature Fusion between multi-scale features of images and point clouds. Specifically, through multi-scale interactive query and fusion between pixel-level and point-level features, our method can obtain more discriminative features to improve the performance of multiple object tracking. Besides, we explore the effectiveness of pre-training on each single modality and fine-tuning the fusion-based model. The experimental results demonstrate that our method achieves good performance on the KITTI benchmark and outperforms other approaches that do not use multi-scale feature fusion. Moreover, the ablation studies indicate the effectiveness of multi-scale feature fusion and pre-training on a single modality.
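The abstract describes an interactive query-and-fusion mechanism in which point-level features from each PointNet++ scale query pixel-level features from the corresponding image scale. Below is a minimal PyTorch sketch of that general idea; the module and variable names (InteractiveFusionBlock, MultiScaleFusion, uv, etc.), the projection-then-bilinear-sampling step, and the MLP fusion are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class InteractiveFusionBlock(nn.Module):
    """Fuse per-point features with image features sampled at projected pixel locations (one scale)."""

    def __init__(self, point_dim, image_dim, out_dim):
        super().__init__()
        # A small MLP mixes the geometric (point) and texture (image) features.
        self.mlp = nn.Sequential(
            nn.Linear(point_dim + image_dim, out_dim),
            nn.ReLU(inplace=True),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, point_feats, image_feats, uv):
        # point_feats: (B, N, Cp)   features from one PointNet++ set-abstraction level
        # image_feats: (B, Ci, H, W) image-backbone feature map at the matching scale
        # uv:          (B, N, 2)    projected pixel coordinates, normalized to [-1, 1]
        grid = uv.unsqueeze(2)                                           # (B, N, 1, 2)
        sampled = F.grid_sample(image_feats, grid, align_corners=False)  # (B, Ci, N, 1)
        sampled = sampled.squeeze(-1).transpose(1, 2)                    # (B, N, Ci)
        fused = self.mlp(torch.cat([point_feats, sampled], dim=-1))
        # Residual connection keeps the geometric features usable if the image branch is weak.
        return fused + point_feats if fused.shape[-1] == point_feats.shape[-1] else fused


class MultiScaleFusion(nn.Module):
    """Apply interactive fusion at every scale of the two backbones."""

    def __init__(self, point_dims=(64, 128, 256), image_dims=(64, 128, 256)):
        super().__init__()
        self.blocks = nn.ModuleList(
            [InteractiveFusionBlock(pd, idim, pd) for pd, idim in zip(point_dims, image_dims)]
        )

    def forward(self, point_feats_pyramid, image_feats_pyramid, uv_pyramid):
        # Each argument is a list with one entry per scale.
        return [
            block(pf, imf, uv)
            for block, pf, imf, uv in zip(
                self.blocks, point_feats_pyramid, image_feats_pyramid, uv_pyramid
            )
        ]
```

In this sketch the point branch acts as the query side: each point is projected into the image plane and bilinearly samples the feature map at every scale, so fusion happens throughout the feature pyramids rather than only at the top level.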

research · 09/21/2023
FGFusion: Fine-Grained Lidar-Camera Fusion for 3D Object Detection
Lidars and cameras are critical sensors that provide complementary infor...

research · 11/29/2022
Segment-based fusion of multi-sensor multi-scale satellite soil moisture retrievals
Synergetic use of sensors for soil moisture retrieval is attracting cons...

research · 10/13/2018
Multi-scale Geometric Summaries for Similarity-based Sensor Fusion
In this work, we address fusion of heterogeneous sensor data using wavel...

research · 07/10/2022
2DPASS: 2D Priors Assisted Semantic Segmentation on LiDAR Point Clouds
As camera and LiDAR sensors capture complementary information used in au...

research · 11/23/2021
AdaFusion: Visual-LiDAR Fusion with Adaptive Weights for Place Recognition
Recent years have witnessed the increasing application of place recognit...

research · 09/25/2022
From One to Many: Dynamic Cross Attention Networks for LiDAR and Camera Fusion
LiDAR and cameras are two complementary sensors for 3D perception in aut...

research · 02/10/2023
LAPTNet-FPN: Multi-scale LiDAR-aided Projective Transform Network for Real Time Semantic Grid Prediction
Semantic grids can be useful representations of the scene around an auto...
