Don't Look Back: Robustifying Place Categorization for Viewpoint- and Condition-Invariant Place Recognition

01/16/2018
by Sourav Garg, et al.

When a human drives a car along a road for the first time, they can later recognize where they are on the return journey, typically without needing to look in the rear-view mirror or turn around to look back, despite significant viewpoint and appearance change. Such navigation capabilities are typically attributed to our semantic visual understanding of the environment [1], which goes beyond geometry to recognizing the types of places we are passing through, such as "passing a shop on the left" or "moving through a forested area". Humans are in effect using place categorization [2] to perform specific place recognition even when the viewpoint is reversed by 180 degrees. Recent advances in deep neural networks have enabled high-performance semantic understanding of visual places and scenes, opening up the possibility of emulating what humans do. In this work, we develop a novel methodology for using the semantics-aware higher-order layers of deep neural networks to recognize specific places from within a reference database. To further improve robustness to appearance change, we develop a descriptor normalization scheme that builds on the success of normalization schemes for purely appearance-based techniques such as SeqSLAM [3]. Using two different datasets, one road-based and one pedestrian-based, we evaluate the performance of the system on place recognition over reverse traversals of a route with a limited-field-of-view camera and no turn-back-and-look behaviours, and compare against existing state-of-the-art techniques and vanilla off-the-shelf features. The results demonstrate significant improvements over the existing state of the art, especially for extreme perceptual challenges that involve both large viewpoint change and environmental appearance change. We also provide experimental analyses of the contributions of the various system components.
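As a rough illustration of the kind of pipeline the abstract describes, the sketch below uses the spatially pooled activations of a late convolutional layer of an off-the-shelf network as a semantics-aware place descriptor, then applies a SeqSLAM-flavoured local contrast normalization before nearest-neighbour matching. Everything here is an assumption rather than the authors' implementation: the backbone (ResNet-18 via torchvision, assuming the 0.13+ weights API), the average pooling, the cosine distance, the window size, and the helper names describe, difference_matrix and local_normalize are all illustrative.

```python
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T

# Illustrative sketch only: an off-the-shelf CNN's late convolutional
# layer serves as a semantics-aware place descriptor, and a SeqSLAM-style
# local contrast normalization is applied to the query/reference
# difference matrix. This is NOT the paper's exact pipeline; the network,
# pooling and window size are assumptions.

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # assumes torchvision >= 0.13
model.eval()
conv_stack = torch.nn.Sequential(*list(model.children())[:-2])    # drop avgpool + fc

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def describe(pil_image):
    """Spatially averaged activations of the final conv block (C-dim vector)."""
    with torch.no_grad():
        fmap = conv_stack(preprocess(pil_image).unsqueeze(0))  # 1 x C x H x W
    return fmap.mean(dim=(2, 3)).squeeze(0).numpy()

def difference_matrix(query_descs, ref_descs):
    """Cosine-distance matrix between query and reference descriptors."""
    q = np.stack(query_descs)
    r = np.stack(ref_descs)
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    r = r / np.linalg.norm(r, axis=1, keepdims=True)
    return 1.0 - q @ r.T

def local_normalize(D, window=10):
    """SeqSLAM-style local contrast enhancement along the reference axis."""
    D_norm = np.zeros_like(D)
    for j in range(D.shape[1]):
        lo, hi = max(0, j - window), min(D.shape[1], j + window + 1)
        patch = D[:, lo:hi]
        D_norm[:, j] = (D[:, j] - patch.mean(axis=1)) / (patch.std(axis=1) + 1e-8)
    return D_norm

# Matching: for each query image, take the reference index with the
# lowest normalized difference score.
# matches = local_normalize(difference_matrix(q_descs, r_descs)).argmin(axis=1)
```

Note that the abstract describes a descriptor-level normalization scheme; the difference-matrix normalization shown here is only a familiar SeqSLAM-flavoured stand-in used to illustrate the general idea of normalization-based robustness to appearance change.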


Related research

02/20/2019
Look No Deeper: Recognizing Places from Opposing Viewpoints under Varying Scene Appearance using Single-View Depth Estimation
Visual place recognition (VPR) - the act of recognizing a familiar visua...

04/16/2018
LoST? Appearance-Invariant Place Recognition for Opposite Viewpoints using Visual Semantics
Human visual scene understanding is so remarkable that we are able to re...

01/17/2015
On the Performance of ConvNet Features for Place Recognition
After the incredible success of deep learning in the computer vision dom...

06/29/2018
Excavate Condition-invariant Space by Intrinsic Encoder
As the human, we can recognize the places across a wide range of changin...

10/03/2020
Early Bird: Loop Closures from Opposing Viewpoints for Perceptually-Aliased Indoor Environments
Significant advances have been made recently in Visual Place Recognition...

07/06/2021
A Hierarchical Dual Model of Environment- and Place-Specific Utility for Visual Place Recognition
Visual Place Recognition (VPR) approaches have typically attempted to ma...

05/27/2022
Improving Road Segmentation in Challenging Domains Using Similar Place Priors
Road segmentation in challenging domains, such as night, snow or rain, i...
