Omnidirectional CNN for Visual Place Recognition and Navigation

03/12/2018
by Tsun-Hsuan Wang, et al.

Visual place recognition is challenging, especially when only a few place exemplars are given. To mitigate the challenge, we consider a place recognition method using omnidirectional cameras and propose a novel Omnidirectional Convolutional Neural Network (O-CNN) to handle severe camera pose variation. Given a visual input, the task of the O-CNN is not to retrieve the matched place exemplar, but to retrieve the closest place exemplar and estimate the relative distance between the input and the closest place. With the ability to estimate relative distance, a heuristic policy is proposed to navigate a robot to the retrieved closest place. Note that the network is designed to take advantage of the omnidirectional view by incorporating circular padding and rotation invariance. To train a powerful O-CNN, we build a virtual world for large-scale training. We also propose a continuous lifted structured feature embedding loss to learn the concept of distance efficiently. Finally, our experimental results confirm that our method achieves state-of-the-art accuracy and speed on both the virtual-world and real-world datasets.
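To illustrate the circular-padding idea mentioned in the abstract, the sketch below wraps a panoramic feature map horizontally before convolution so the 0°/360° seam is treated as contiguous. This is a minimal PyTorch-style sketch under our own assumptions: the module name (CircularConv2d), kernel size, and tensor shapes are illustrative and are not taken from the authors' implementation.

```python
# Minimal sketch of horizontal circular padding for panoramic feature maps
# (assumed PyTorch implementation; names and sizes are illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class CircularConv2d(nn.Module):
    """Convolution that wraps the panorama along the width (azimuth) axis
    with circular padding and zero-pads along the height (elevation) axis."""

    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        self.pad = kernel_size // 2
        # padding=0 here; padding is applied manually so the wrap-around is explicit.
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, padding=0)

    def forward(self, x):
        # Wrap the left/right edges: the panorama's 0° and 360° columns are neighbors.
        x = F.pad(x, (self.pad, self.pad, 0, 0), mode="circular")
        # Ordinary zero padding on the top/bottom edges.
        x = F.pad(x, (0, 0, self.pad, self.pad), mode="constant", value=0.0)
        return self.conv(x)


if __name__ == "__main__":
    # Example: batch of 2 omnidirectional feature maps, 64 channels, 32x128 (H x W).
    feats = torch.randn(2, 64, 32, 128)
    layer = CircularConv2d(64, 128, kernel_size=3)
    out = layer(feats)
    print(out.shape)  # torch.Size([2, 128, 32, 128])
```

Because a horizontal shift of the panorama corresponds to rotating the camera about its vertical axis, padding circularly along the width keeps the convolution consistent across that seam, which is one way to build in the rotation robustness the abstract refers to.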

Related research

On the Performance of ConvNet Features for Place Recognition (01/17/2015)
After the incredible success of deep learning in the computer vision dom...

Forming a sparse representation for visual place recognition using a neurorobotic approach (09/30/2021)
This paper introduces a novel unsupervised neural network model for visu...

Spatio-Semantic ConvNet-Based Visual Place Recognition (09/17/2019)
We present a Visual Place Recognition system that follows the two-stage ...

Visual Global Localization with a Hybrid WNN-CNN Approach (05/08/2018)
Currently, self-driving cars rely greatly on the Global Positioning Syst...

Self-Supervised Visual Place Recognition Learning in Mobile Robots (05/11/2019)
Place recognition is a critical component in robot navigation that enabl...

Transferring ConvNet Features from Passive to Active Robot Self-Localization: The Use of Ego-Centric and World-Centric Views (04/22/2022)
The training of a next-best-view (NBV) planner for visual place recognit...

STUN: Self-Teaching Uncertainty Estimation for Place Recognition (03/03/2022)
Place recognition is key to Simultaneous Localization and Mapping (SLAM)...
