Multi-Modal Loop Closing in Unstructured Planetary Environments with Visually Enriched Submaps

05/05/2021
by Riccardo Giubilato, et al.

Future planetary missions will rely on rovers that can autonomously explore and navigate unstructured environments. An essential capability is recognizing places that have already been visited or mapped. In this work we leverage the ability of stereo cameras to provide both visual and depth information, guiding the search for and validation of loop closures from a multi-modal perspective. We propose to augment submaps, created by aggregating stereo point clouds, with visual keyframes. Point cloud matches are found by comparing CSHOT descriptors and validated by clustering, while visual matches are established by comparing keyframes using Bag-of-Words (BoW) and ORB descriptors. The relative transformations resulting from both keyframe and point cloud matches are then fused to provide pose constraints between submaps in our graph-based SLAM framework. Using the LRU rover, we performed several tests in both an indoor laboratory environment and a challenging planetary analog environment on Mount Etna, Italy. These environments contain areas where either keyframes or point clouds alone fail to provide adequate matches, demonstrating the benefit of the proposed multi-modal approach.
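As a rough illustration of the visual side of such a pipeline, the sketch below matches ORB features between two keyframes and validates the candidate loop closure with a RANSAC essential-matrix check, assuming OpenCV. This is not the authors' implementation: brute-force Hamming matching stands in for the BoW retrieval described above, the camera intrinsics and image paths are placeholders, and the point cloud leg (CSHOT matching with cluster-based validation) as well as the pose-graph fusion are omitted.

```python
import cv2
import numpy as np

# Placeholder intrinsics for the stereo rig's left camera (fx, fy, cx, cy are hypothetical values).
K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])

def match_keyframes(img_query, img_candidate, min_inliers=30):
    """Estimate a relative rotation/translation between two keyframes,
    or return None if the geometric check rejects the candidate loop closure."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img_query, None)
    kp2, des2 = orb.detectAndCompute(img_candidate, None)
    if des1 is None or des2 is None:
        return None

    # Brute-force Hamming matching with cross-checking (a stand-in for BoW-based retrieval).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if len(matches) < min_inliers:
        return None

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Geometric validation: RANSAC essential-matrix fit between the two views.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    if E is None or int(mask.sum()) < min_inliers:
        return None

    # Recover the relative pose (rotation and unit-norm translation direction;
    # metric scale would come from the stereo depth in a full system).
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t

if __name__ == "__main__":
    # Hypothetical image paths; in a SLAM system these would be keyframes attached to two submaps.
    img_a = cv2.imread("keyframe_a.png", cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread("keyframe_b.png", cv2.IMREAD_GRAYSCALE)
    result = match_keyframes(img_a, img_b)
    print("loop closure accepted" if result is not None else "loop closure rejected")
```

In a complete system along the lines described in the abstract, the relative pose obtained here would be fused with the point-cloud-based estimate and inserted as a constraint between the two submap nodes of the pose graph.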

Related research

09/16/2019 · Place Recognition for Stereo Visual Odometry using LiDAR descriptors
Place recognition is a core component in SLAM, and in most visual SLAM s...

09/01/2020 · Gaussian Process Gradient Maps for Loop-Closure Detection in Unstructured Planetary Environments
The ability to recognize previously mapped locations is an essential fea...

09/01/2022 · MM-PCQA: Multi-Modal Learning for No-reference Point Cloud Quality Assessment
The visual quality of point clouds has been greatly emphasized since the...

03/18/2022 · Sparse Fuse Dense: Towards High Quality 3D Detection with Depth Completion
Current LiDAR-only 3D detection methods inevitably suffer from the spars...

05/10/2023 · A Multi-modal Garden Dataset and Hybrid 3D Dense Reconstruction Framework Based on Panoramic Stereo Images for a Trimming Robot
Recovering an outdoor environment's surface mesh is vital for an agricul...

02/22/2018 · Multi-Sensor Integration for Indoor 3D Reconstruction
Outdoor maps and navigation information delivered by modern services and...

03/17/2023 · PersonalTailor: Personalizing 2D Pattern Design from 3D Garment Point Clouds
Garment pattern design aims to convert a 3D garment to the corresponding...
