Interactive Multi-scale Fusion of 2D and 3D Features for Multi-object Tracking

by Guangming Wang, et al.

Multiple object tracking (MOT) is a critical task for autonomous driving. Traditional approaches address it using either point clouds (PC) collected by LiDAR or images captured by cameras. However, relying on a single sensor is not robust enough, because that sensor may fail during tracking. Conversely, fusing features from multiple modalities improves accuracy, so new techniques that integrate features from different sensors are being developed. Texture information from RGB cameras and 3D structural information from LiDAR offer complementary advantages under different circumstances. Effective feature fusion is nevertheless difficult to achieve because the two modalities encode information in completely different forms. Previous fusion methods usually fuse only the top-level features after separate backbones have processed each modality. In this paper, we first introduce PointNet++ to obtain multi-scale deep representations of the point cloud, making it compatible with our proposed Interactive Feature Fusion between multi-scale image and point cloud features. Specifically, through multi-scale interactive querying and fusion between pixel-level and point-level features, our method obtains more discriminative features that improve multiple object tracking performance. In addition, we explore the effectiveness of pre-training on each single modality and fine-tuning the fusion-based model. Experimental results demonstrate that our method achieves strong performance on the KITTI benchmark and outperforms approaches that do not use multi-scale feature fusion. Moreover, ablation studies confirm the effectiveness of both multi-scale feature fusion and single-modality pre-training.
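The core idea described above, querying image features at projected point locations and fusing them with point features at several pyramid scales, can be sketched as follows. This is an illustrative toy implementation, not the authors' code: the function names, the nearest-neighbor query, and the additive fusion rule are all simplifying assumptions (the paper's interactive fusion is richer).

```python
import numpy as np

def project_points(points, K):
    """Project 3D points (N, 3) into pixel coordinates with intrinsics K (3, 3)."""
    uvw = points @ K.T
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide -> (N, 2)

def query_and_fuse(point_feats, image_feats, uv, stride):
    """Query the image feature map at each projected point and fuse features.

    point_feats: (N, C)    per-point features at this scale
    image_feats: (H, W, C) image feature map at this scale
    uv:          (N, 2)    pixel coords in the full-resolution image
    stride:      downsampling factor of this feature map
    """
    h, w, _ = image_feats.shape
    # Nearest-neighbor lookup at the map resolution, clipped to bounds.
    cols = np.clip((uv[:, 0] / stride).astype(int), 0, w - 1)
    rows = np.clip((uv[:, 1] / stride).astype(int), 0, h - 1)
    pixel_feats = image_feats[rows, cols]  # (N, C) gathered image features
    return point_feats + pixel_feats       # toy additive fusion (assumption)

def multi_scale_fusion(points, K, scales):
    """scales: list of (point_feats, image_feats, stride), one per pyramid level."""
    uv = project_points(points, K)
    return [query_and_fuse(pf, imf, s_) for pf, imf, s_ in
            [(pf, imf, (uv, st)) for pf, imf, st in scales]] if False else \
           [query_and_fuse(pf, imf, uv, st) for pf, imf, st in scales]

# Synthetic example: 128 points in front of a pinhole camera, two scales.
rng = np.random.default_rng(0)
points = rng.uniform([-5.0, -5.0, 1.0], [5.0, 5.0, 10.0], size=(128, 3))
K = np.array([[100.0, 0.0, 64.0], [0.0, 100.0, 48.0], [0.0, 0.0, 1.0]])
scales = [
    (rng.standard_normal((128, 16)), rng.standard_normal((96, 128, 16)), 1),
    (rng.standard_normal((128, 32)), rng.standard_normal((48, 64, 32)), 2),
]
fused = multi_scale_fusion(points, K, scales)
print([f.shape for f in fused])
```

Each pyramid level keeps its own channel width, so the fused output is a list of per-scale point features that a downstream tracking head could consume.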



