Learning Matchable Colorspace Transformations for Long-term Metric Visual Localization

04/01/2019
by Lee Clement, et al.

Long-term metric localization is an essential capability of autonomous mobile robots, but remains challenging for vision-based systems in the presence of appearance change caused by lighting, weather or seasonal variations. While experience-based mapping has proven to be an effective technique for enabling visual localization across appearance change, the number of experiences required for reliable long-term localization can be large, and methods for reducing the necessary number of experiences are desired. Taking inspiration from physics-based models of color constancy, we propose a method for learning a nonlinear mapping from RGB to grayscale colorspaces that maximizes the number of feature matches for images captured under varying lighting and weather conditions. Our key insight is that useful image transformations can be learned by approximating conventional non-differentiable localization pipelines with a differentiable learned model that can predict a convenient measure of localization quality, such as the number of feature matches, for a given pair of images. Moreover, we find that the generality of appearance-robust RGB-to-grayscale mappings can be improved by incorporating a learned low-dimensional context feature computed for a specific image pair. Using synthetic and real-world datasets, we show that our method substantially improves feature matching across day-night cycles and presents a viable strategy for significantly improving the efficiency of experience-based visual localization.
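The core idea, replacing a non-differentiable feature-matching pipeline with a smooth surrogate objective so that a colorspace mapping can be optimized directly, can be sketched in plain NumPy. This is an illustrative toy, not the authors' implementation: the learned match-count predictor is replaced by a hypothetical edge-energy proxy, and a four-parameter mapping (channel weights plus a gamma curve) stands in for the paper's learned nonlinear RGB-to-grayscale transformation. All names and parameters below are assumptions for illustration.

```python
import numpy as np

def to_gray(img, w):
    """Nonlinear RGB->grayscale mapping (illustrative): a convex
    channel mix (softmax of w[:3]) followed by a gamma curve
    (exp(w[3])). `img` is H x W x 3 with values in [0, 1]."""
    ch = np.exp(w[:3]) / np.exp(w[:3]).sum()
    gamma = np.exp(w[3])
    g = np.tensordot(img, ch, axes=([2], [0]))  # H x W grayscale
    return np.clip(g, 1e-6, 1.0) ** gamma

def proxy_score(gray):
    """Stand-in for the paper's learned match-count predictor:
    mean edge energy, a crude differentiable proxy for how much
    matchable structure survives the mapping."""
    gx = np.diff(gray, axis=1)
    gy = np.diff(gray, axis=0)
    return (gx ** 2).mean() + (gy ** 2).mean()

def optimize(img, steps=20, lr=0.2, eps=1e-4):
    """Maximize the proxy score over the mapping parameters by
    finite-difference gradient ascent, returning the best-scoring
    parameters seen. (The paper instead trains a differentiable
    surrogate network and backpropagates through it.)"""
    w = np.zeros(4)
    best_w, best_s = w.copy(), proxy_score(to_gray(img, w))
    for _ in range(steps):
        grad = np.zeros_like(w)
        for i in range(len(w)):
            wp, wm = w.copy(), w.copy()
            wp[i] += eps
            wm[i] -= eps
            grad[i] = (proxy_score(to_gray(img, wp)) -
                       proxy_score(to_gray(img, wm))) / (2 * eps)
        w += lr * grad
        s = proxy_score(to_gray(img, w))
        if s > best_s:
            best_s, best_w = s, w.copy()
    return best_w
```

Finite differences keep the sketch dependency-free; the key design point it shares with the paper is that once the localization-quality measure is smooth in the mapping parameters, the colorspace transformation can be fit by ordinary gradient-based optimization.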


Related research

- 06/22/2023: What to Learn: Features, Image Transformations, or Both? (Long-term visual localization is an essential problem in robotics and co...)
- 08/16/2020: Image Stylization for Robust Features (Local features that are robust to both viewpoint and appearance changes ...)
- 09/09/2017: How to Train a CAT: Learning Canonical Appearance Transformations for Direct Visual Localization Under Illumination Change (Direct visual localization has recently enjoyed a resurgence in populari...)
- 08/01/2018: Connecting Visual Experiences using Max-flow Network with Application to Visual Localization (We are motivated by the fact that multiple representations of the enviro...)
- 03/09/2018: Adversarial Training for Adverse Conditions: Robust Metric Localisation using Appearance Transfer (We present a method of improving visual place recognition and metric loc...)
- 12/10/2018: Efficient Condition-based Representations for Long-Term Visual Localization (We propose an approach to localization from images that is designed to e...)
- 09/09/2021: Keeping an Eye on Things: Deep Learned Features for Long-Term Visual Localization (In this paper, we learn visual features that we use to first build a map...)
