Towards Robust Monocular Visual Odometry for Flying Robots on Planetary Missions

09/12/2021
by Martin Wudenka, et al.

In the future, extraterrestrial expeditions will not only be conducted by rovers but also by flying robots. The technology demonstration drone Ingenuity, which recently landed on Mars, will mark the beginning of a new era of exploration unhindered by terrain traversability. Robust self-localization is crucial for this. Cameras, being lightweight, cheap, and information-rich sensors, are already used to estimate the ego-motion of vehicles. However, methods proven to work in man-made environments cannot simply be deployed on other planets: the highly repetitive textures in the wastelands of Mars pose a major challenge to approaches based on descriptor matching. In this paper, we present an advanced robust monocular odometry algorithm that uses efficient optical flow tracking to obtain feature correspondences between images, together with a refined keyframe selection criterion. In contrast to most other approaches, our framework also handles rotation-only motions, which are particularly challenging for monocular odometry systems. Furthermore, we present a novel approach to estimating the current risk of scale drift, based on a principal component analysis of the relative translation information matrix. In this way, we obtain an implicit measure of uncertainty. We evaluate the validity of our approach on all sequences of a challenging real-world dataset captured in a Mars-like environment and show that it outperforms state-of-the-art approaches.
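The abstract names two concrete mechanisms: optical-flow-based feature tracking and a scale-drift risk derived from a principal component analysis of the relative translation information matrix. The sketches below are illustrative assumptions only, not the authors' implementation; the function names (track_features, scale_drift_risk) and parameter values are hypothetical.

A minimal sketch of obtaining frame-to-frame feature correspondences with pyramidal Lucas-Kanade optical flow (via OpenCV) instead of descriptor matching:

```python
# Illustrative only: KLT feature tracking between consecutive grayscale frames.
import cv2
import numpy as np

def track_features(prev_gray, curr_gray, prev_pts):
    """Track prev_pts (Nx1x2 float32 pixel coordinates) from prev_gray into
    curr_gray and return the surviving correspondences."""
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    good = status.ravel() == 1
    return prev_pts[good], curr_pts[good]

# Initial features could come from a Shi-Tomasi corner detector, e.g.
# prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
#                                    qualityLevel=0.01, minDistance=10)
```

For the scale-drift risk, a principal component analysis of the symmetric 3x3 translation information matrix amounts to an eigendecomposition: directions with small eigenvalues are weakly constrained, so the translation (and hence the monocular scale) can drift. The score below is one plausible way to turn that spectrum into a risk value; the criterion actually used in the paper may differ.

```python
import numpy as np

def scale_drift_risk(H_t):
    """Heuristic risk in [0, 1] from the 3x3 information matrix H_t
    (inverse covariance) of a relative translation estimate."""
    H_t = 0.5 * (H_t + H_t.T)          # enforce symmetry
    eigvals = np.linalg.eigvalsh(H_t)  # principal directions, ascending order
    lam_min, lam_max = eigvals[0], eigvals[-1]
    if lam_max <= 0.0:
        return 1.0                     # no usable translation information
    # A weakest-to-strongest eigenvalue ratio near zero means one direction
    # is almost unobservable, i.e. the scale is at high risk of drifting.
    return 1.0 - max(lam_min, 0.0) / lam_max
```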
