Multi-Modal Recurrent Fusion for Indoor Localization

02/19/2022
by Jianyuan Yu et al.

This paper considers indoor localization using multi-modal wireless signals, including Wi-Fi, inertial measurement unit (IMU), and ultra-wideband (UWB) measurements. Localization is formulated as a multi-modal sequence regression problem, and a multi-stream recurrent fusion method is proposed that combines the current hidden state of each modality within a recurrent neural network while accounting for each modality's uncertainty, which is learned directly from that modality's own immediate past states. The proposed method was evaluated on the large-scale SPAWC2021 multi-modal localization dataset and compared against a wide range of baselines, including trilateration, traditional fingerprinting methods, and convolutional neural network-based methods.
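To make the fusion idea concrete, here is a minimal sketch of one way such a multi-stream recurrent fusion regressor could be structured. This is not the authors' implementation: the use of GRU streams, the mean over past hidden states as the "immediate past" summary, the linear uncertainty heads, the softmax weighting, and all dimensions and names below are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

class MultiStreamRecurrentFusion(nn.Module):
    """Sketch of a multi-stream recurrent fusion regressor (illustrative only).

    Each modality (e.g. Wi-Fi, IMU, UWB) gets its own recurrent stream; a small
    head scores that stream's uncertainty from its recent hidden states, and the
    current hidden states are fused with weights that down-weight uncertain
    modalities before a final position regression.
    """

    def __init__(self, input_dims, hidden_dim=64, out_dim=2):
        super().__init__()
        # One GRU stream per modality (assumed recurrent cell type).
        self.streams = nn.ModuleList(
            [nn.GRU(d, hidden_dim, batch_first=True) for d in input_dims]
        )
        # One uncertainty head per modality, fed with a summary of the
        # stream's past hidden states (a stand-in for "immediate past states").
        self.uncertainty_heads = nn.ModuleList(
            [nn.Linear(hidden_dim, 1) for _ in input_dims]
        )
        self.regressor = nn.Linear(hidden_dim, out_dim)

    def forward(self, inputs):
        # inputs: list of tensors, one per modality, each (batch, time, feat)
        states, neg_uncertainty = [], []
        for x, gru, head in zip(inputs, self.streams, self.uncertainty_heads):
            h_seq, _ = gru(x)                    # (batch, time, hidden)
            h_now = h_seq[:, -1]                 # current hidden state
            h_past = h_seq[:, :-1].mean(dim=1)   # summary of past hidden states
            neg_uncertainty.append(-head(h_past))  # lower uncertainty -> larger weight
            states.append(h_now)
        # Softmax over modalities turns negated uncertainties into fusion weights.
        weights = torch.softmax(torch.stack(neg_uncertainty, dim=1), dim=1)
        fused = (torch.stack(states, dim=1) * weights).sum(dim=1)
        return self.regressor(fused)             # predicted (x, y) position

# Hypothetical usage with made-up feature sizes for Wi-Fi RSSI, IMU, and UWB ranges.
model = MultiStreamRecurrentFusion(input_dims=[30, 6, 4])
wifi, imu, uwb = torch.randn(8, 20, 30), torch.randn(8, 20, 6), torch.randn(8, 20, 4)
print(model([wifi, imu, uwb]).shape)  # torch.Size([8, 2])
```

The key design choice illustrated here is that each modality's fusion weight depends only on that modality's own recent hidden trajectory, so a temporarily unreliable sensor can be down-weighted without retraining the other streams.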

