TransFusionOdom: Interpretable Transformer-based LiDAR-Inertial Fusion Odometry Estimation

04/16/2023
by Leyuan Sun, et al.

Multi-modal sensor fusion is a commonly used approach to enhance odometry estimation, a fundamental module for mobile robots. However, how to perform fusion among different modalities in a supervised sensor-fusion odometry estimation task remains a challenging open question. Simple operations such as element-wise summation and concatenation cannot assign adaptive attentional weights to incorporate different modalities efficiently, which makes it difficult to achieve competitive odometry results. Recently, the Transformer architecture has shown potential for multi-modal fusion tasks, particularly in vision-language domains. In this work, we propose an end-to-end supervised Transformer-based LiDAR-Inertial fusion framework (named TransFusionOdom) for odometry estimation. The multi-attention fusion module applies different fusion approaches to homogeneous and heterogeneous modalities, addressing the overfitting that can arise from blindly increasing model complexity. Additionally, to interpret the learning process of the Transformer-based multi-modal interactions, a general visualization approach is introduced to illustrate the interactions between modalities. Moreover, exhaustive ablation studies evaluate different multi-modal fusion strategies to verify the performance of the proposed fusion strategy. A synthetic multi-modal dataset is made public to validate the generalization ability of the proposed fusion strategy, which also works for other combinations of modalities. Quantitative and qualitative odometry evaluations on the KITTI dataset verify that the proposed TransFusionOdom achieves superior performance compared with other related works.
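To make the attention-based fusion idea concrete, below is a minimal PyTorch sketch of cross-attention between LiDAR and IMU feature tokens with a simple pose regression head. This is not the authors' implementation: the class name, token shapes, feature dimension, and pooling scheme are all illustrative assumptions, and it only shows the general mechanism by which learned attentional weights replace naive summation or concatenation.

```python
# Minimal sketch (not the authors' code): cross-attention fusion of
# assumed LiDAR and IMU feature tokens, followed by a 6-DoF pose head.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Fuse heterogeneous modality tokens with multi-head cross-attention.

    Assumed shapes: lidar_tokens (B, N_l, D), imu_tokens (B, N_i, D).
    """
    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        # 6-DoF output: 3 translation + 3 rotation parameters.
        self.pose_head = nn.Sequential(
            nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 6)
        )

    def forward(self, lidar_tokens, imu_tokens):
        # LiDAR tokens query the IMU tokens: adaptive attentional weights
        # instead of element-wise summation or concatenation.
        fused, attn_weights = self.attn(
            query=lidar_tokens, key=imu_tokens, value=imu_tokens
        )
        fused = self.norm(lidar_tokens + fused)   # residual connection
        pose = self.pose_head(fused.mean(dim=1))  # pool tokens -> 6-DoF pose
        return pose, attn_weights                 # weights support inspection

if __name__ == "__main__":
    model = CrossAttentionFusion()
    lidar = torch.randn(2, 64, 256)   # batch of LiDAR feature tokens
    imu = torch.randn(2, 16, 256)     # batch of IMU feature tokens
    pose, weights = model(lidar, imu)
    print(pose.shape, weights.shape)  # torch.Size([2, 6]) torch.Size([2, 64, 16])
```

Returning the attention weights alongside the pose echoes the interpretability theme of the abstract: the per-token weight maps can be visualized to see which IMU tokens each LiDAR token attends to during fusion.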


