Fisher Information Field: an Efficient and Differentiable Map for Perception-aware Planning

08/07/2020
by   Zichao Zhang, et al.

Considering visual localization accuracy at planning time favors robot motions that can be localized more accurately and thus has the potential to improve vision-based navigation, especially in visually degraded environments. To integrate knowledge about localization accuracy into motion planning algorithms, a central task is to quantify the amount of information that an image taken at a 6 degree-of-freedom (DoF) pose contributes to localization, which is often represented by the Fisher information. However, computing the Fisher information from a set of sparse landmarks (i.e., a point cloud), which is the most common map for visual localization, is inefficient: the computation scales linearly with the number of landmarks in the environment and does not allow the computed Fisher information to be reused. To overcome these drawbacks, we propose the first dedicated map representation for evaluating the Fisher information of 6 DoF visual localization for perception-aware motion planning. By carefully formulating the Fisher information and sensor visibility, we are able to separate the rotation-invariant component of the Fisher information and store it in a voxel grid, namely the Fisher information field. This step needs to be performed only once for a known environment. The Fisher information for an arbitrary pose can then be computed from the field in constant time, eliminating the costly iteration over all 3D landmarks at planning time. Experimental results show that the proposed Fisher information field can be applied to different motion planning algorithms and is at least one order of magnitude faster than using the point cloud directly. Moreover, the proposed map representation is differentiable, resulting in better performance than the point cloud when used in trajectory optimization algorithms.

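The sketch below is not the authors' implementation; it only illustrates the precompute-then-lookup idea behind the abstract: evaluating a localization-information score directly from a point cloud costs O(N) per query, whereas precomputing values on a voxel grid reduces each query to an O(1) lookup. The visibility model, the use of a single scalar per voxel (the actual field stores rotation-invariant components from which pose-dependent Fisher information is recovered), and all names such as `InfoField` are simplifying assumptions.

```python
import numpy as np

def info_from_point_cloud(position, landmarks, max_range=10.0):
    """O(N) per query: iterate all landmarks and accumulate a toy
    range-weighted information contribution for a query position."""
    diffs = landmarks - position                 # (N, 3)
    dists = np.linalg.norm(diffs, axis=1)
    visible = dists < max_range                  # simplistic visibility check
    return np.sum(1.0 / (1.0 + dists[visible] ** 2))

class InfoField:
    """O(1) per query: precompute the positional information once per
    voxel, then answer planning-time queries by grid lookup."""
    def __init__(self, landmarks, bounds_min, bounds_max, voxel_size, max_range=10.0):
        self.origin = np.asarray(bounds_min, dtype=float)
        self.voxel_size = voxel_size
        shape = np.ceil((np.asarray(bounds_max) - self.origin) / voxel_size).astype(int)
        self.grid = np.zeros(shape)
        # one-time cost over all voxels and landmarks (done for a known map)
        for idx in np.ndindex(*shape):
            center = self.origin + (np.array(idx) + 0.5) * voxel_size
            self.grid[idx] = info_from_point_cloud(center, landmarks, max_range)

    def query(self, position):
        idx = tuple(((np.asarray(position) - self.origin) // self.voxel_size).astype(int))
        return self.grid[idx]

# usage: build the field once, query it many times during planning
rng = np.random.default_rng(0)
landmarks = rng.uniform(0.0, 10.0, size=(5000, 3))
field = InfoField(landmarks, bounds_min=[0, 0, 0], bounds_max=[10, 10, 10], voxel_size=1.0)
print(info_from_point_cloud(np.array([5.0, 5.0, 5.0]), landmarks))  # O(N) per query
print(field.query([5.0, 5.0, 5.0]))                                 # O(1) per query
```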

