Learning Matchable Colorspace Transformations for Long-term Metric Visual Localization

by Lee Clement et al.

Long-term metric localization is an essential capability of autonomous mobile robots, but remains challenging for vision-based systems in the presence of appearance change caused by lighting, weather or seasonal variations. While experience-based mapping has proven to be an effective technique for enabling visual localization across appearance change, the number of experiences required for reliable long-term localization can be large, and methods for reducing the necessary number of experiences are desired. Taking inspiration from physics-based models of color constancy, we propose a method for learning a nonlinear mapping from RGB to grayscale colorspaces that maximizes the number of feature matches for images captured under varying lighting and weather conditions. Our key insight is that useful image transformations can be learned by approximating conventional non-differentiable localization pipelines with a differentiable learned model that can predict a convenient measure of localization quality, such as the number of feature matches, for a given pair of images. Moreover, we find that the generality of appearance-robust RGB-to-grayscale mappings can be improved by incorporating a learned low-dimensional context feature computed for a specific image pair. Using synthetic and real-world datasets, we show that our method substantially improves feature matching across day-night cycles and presents a viable strategy for significantly improving the efficiency of experience-based visual localization.
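The key insight above, replacing a non-differentiable matching pipeline with a differentiable surrogate of localization quality so that a colorspace mapping can be optimized by gradient descent, can be sketched in a toy NumPy example. Everything here is an illustrative assumption rather than the paper's implementation: the synthetic "day"/"night" image pair, the softmax-weighted linear RGB-to-grayscale mapping (the paper learns a nonlinear one), the photometric-consistency loss standing in for the learned match-count predictor, and the finite-difference optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for two registered images of the same scene under different
# illumination: a "day" image and a channel-wise color-shifted "night" image.
# (Hypothetical data; the paper uses synthetic and real-world datasets.)
day = rng.random((32, 32, 3))
night = np.clip(day * np.array([0.3, 0.5, 1.2]), 0.0, 1.0)

def to_gray(img, w):
    """RGB-to-grayscale mapping with learnable channel weights.

    Softmax keeps the per-channel weights positive and summing to one.
    """
    sw = np.exp(w) / np.exp(w).sum()
    return img @ sw

def surrogate_loss(w):
    # Differentiable proxy for (negated) matchability: photometric
    # consistency of the two transformed images. The paper instead trains
    # a network to predict the number of feature matches directly.
    return np.mean((to_gray(day, w) - to_gray(night, w)) ** 2)

# Plain gradient descent; a central finite-difference gradient keeps the
# sketch short (an autodiff framework would be used in practice).
w = np.zeros(3)
eps, lr = 1e-5, 5.0
for _ in range(200):
    grad = np.zeros(3)
    for i in range(3):
        dw = np.zeros(3)
        dw[i] = eps
        grad[i] = (surrogate_loss(w + dw) - surrogate_loss(w - dw)) / (2 * eps)
    w -= lr * grad
```

After optimization the loss is lower than for equal channel weights, and the mapping has shifted weight toward the channel least distorted by the simulated lighting change, which is the qualitative behavior the learned transformation is meant to exhibit at the level of feature matches.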







Image Stylization for Robust Features

Local features that are robust to both viewpoint and appearance changes ...

Adversarial Training for Adverse Conditions: Robust Metric Localisation using Appearance Transfer

We present a method of improving visual place recognition and metric loc...

How to Train a CAT: Learning Canonical Appearance Transformations for Direct Visual Localization Under Illumination Change

Direct visual localization has recently enjoyed a resurgence in populari...

Connecting Visual Experiences using Max-flow Network with Application to Visual Localization

We are motivated by the fact that multiple representations of the enviro...

Efficient Condition-based Representations for Long-Term Visual Localization

We propose an approach to localization from images that is designed to e...

Keeping an Eye on Things: Deep Learned Features for Long-Term Visual Localization

In this paper, we learn visual features that we use to first build a map...

2-Entity RANSAC for robust visual localization in changing environment

Visual localization has attracted considerable attention due to its low-...