PROBE: Predictive Robust Estimation for Visual-Inertial Navigation

08/01/2017
by Valentin Peretroukhin, et al.

Navigation in unknown, chaotic environments continues to present a significant challenge for the robotics community. Lighting changes, self-similar textures, motion blur, and moving objects are all considerable stumbling blocks for state-of-the-art vision-based navigation algorithms. In this paper we present a novel technique for improving localization accuracy within a visual-inertial navigation system (VINS). We make use of training data to learn a model for the quality of visual features with respect to localization error in a given environment. This model maps each visual observation from a predefined prediction space of visual-inertial predictors onto a scalar weight, which is then used to scale the observation covariance matrix. In this way, our model can adjust the influence of each observation according to its quality. We discuss our choice of predictors and report substantial reductions in localization error on 4 km of data from the KITTI dataset, as well as on experimental datasets consisting of 700 m of indoor and outdoor driving on a small ground rover equipped with a Skybotix VI-Sensor.
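
The abstract describes the core mechanism: a learned model maps per-observation visual-inertial predictors to a scalar weight that scales the observation covariance, reducing the influence of low-quality features. The Python sketch below illustrates that idea only; it is not the authors' implementation. The predictor list, the RandomForestRegressor choice, and all function and variable names (e.g. scaled_observation_covariance) are illustrative assumptions.

# Minimal sketch (not the authors' code): scale a stereo observation's
# covariance by a scalar weight learned from visual-inertial predictors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# --- Training phase (offline) -------------------------------------------
# Each row of X holds predictors for one feature observation, e.g.
# [image_u, image_v, disparity, local_entropy, blur_metric,
#  angular_rate, linear_acceleration]; y holds a quality target derived
# from localization error on ground-truth training runs. Both are
# placeholders here.
X_train = np.random.rand(1000, 7)
y_train = np.random.rand(1000)

model = RandomForestRegressor(n_estimators=100)
model.fit(X_train, y_train)

# --- Estimation phase (online) ------------------------------------------
def scaled_observation_covariance(predictors, base_cov, model, w_min=1.0):
    """Map one observation's predictors to a scalar weight and inflate its
    nominal covariance accordingly (lower predicted quality -> larger
    covariance -> less influence in the VINS update)."""
    weight = float(model.predict(predictors.reshape(1, -1))[0])
    weight = max(weight, w_min)   # never shrink below the nominal covariance
    return weight * base_cov

R_nominal = np.diag([1.0, 1.0, 4.0])   # nominal stereo measurement covariance (px^2)
z_predictors = np.random.rand(7)       # predictors for one incoming observation
R_scaled = scaled_observation_covariance(z_predictors, R_nominal, model)

In a filter- or optimization-based VINS back end, R_scaled would simply replace the nominal covariance for that observation; the rest of the estimator is unchanged.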


Related research

03/06/2019 · RINS-W: Robust Inertial Navigation System on Wheels
This paper proposes a real-time approach for long-term inertial navigati...

09/16/2022 · VINet: Visual and Inertial-based Terrain Classification and Adaptive Navigation over Unknown Terrain
We present a visual and inertial-based terrain classification network (V...

08/01/2017 · PROBE-GK: Predictive Robust Estimation using Generalized Kernels
Many algorithms in computer vision and robotics make strong assumptions ...

03/29/2022 · Neural Inertial Localization
This paper proposes the inertial localization problem, the task of estim...

11/26/2022 · DynaVIG: Monocular Vision/INS/GNSS Integrated Navigation and Object Tracking for AGV in Dynamic Scenes
Visual-Inertial Odometry (VIO) usually suffers from drifting over long-t...

09/01/2020 · Multimodal Aggregation Approach for Memory Vision-Voice Indoor Navigation with Meta-Learning
Vision and voice are two vital keys for agents' interaction and learning...

06/29/2021 · Towards Generalisable Deep Inertial Tracking via Geometry-Aware Learning
Autonomous navigation in uninstrumented and unprepared environments is a...
