RoRD: Rotation-Robust Descriptors and Orthographic Views for Local Feature Matching

03/15/2021
by Udit Singh Parihar, et al.

The use of local detectors and descriptors in typical computer vision pipelines works well until variations in viewpoint and appearance become extreme. Past research in this area has typically focused on one of two approaches to this challenge: projecting images into spaces more suitable for feature matching under extreme viewpoint changes, or learning features that are inherently more robust to viewpoint change. In this paper, we present a novel framework that combines learning of invariant descriptors through data augmentation with orthographic viewpoint projection. We propose rotation-robust local descriptors, learnt through training-data augmentation based on rotation homographies, and a correspondence ensemble technique that combines vanilla feature correspondences with those obtained through rotation-robust features. Using a range of benchmark datasets, as well as contributing a new bespoke dataset for this research domain, we evaluate the effectiveness of the proposed approach on key tasks including pose estimation and visual place recognition. Our system outperforms a range of baseline and state-of-the-art techniques, enabling higher levels of place recognition precision across opposing place viewpoints and achieving practically useful performance even under extreme viewpoint changes.
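Two of the components described above are easy to illustrate in isolation: rotation homographies used as training-data augmentation, and a correspondence ensemble that pools matches from vanilla and rotation-robust descriptors. The sketch below is a minimal illustration, not the authors' released implementation; the function names, the OpenCV brute-force matcher, and the ratio-test threshold are assumptions made for the example, and descriptor extraction (e.g. a learnt CNN descriptor) is assumed to happen elsewhere.

```python
# Minimal, illustrative sketch (NOT the authors' released code) of:
# (1) in-plane rotation homographies for training-data augmentation, and
# (2) pooling correspondences from vanilla and rotation-robust descriptors.

import cv2
import numpy as np


def rotation_homography(image, angle_deg):
    """Warp an image by an in-plane rotation about its centre.

    The 2x3 rotation matrix is lifted to a 3x3 homography so it can be
    composed with other homographies during augmentation.
    """
    h, w = image.shape[:2]
    R = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)  # 2x3
    H = np.vstack([R, [0.0, 0.0, 1.0]])                              # 3x3
    warped = cv2.warpPerspective(image, H, (w, h))
    return warped, H


def ensemble_matches(desc_vanilla_a, desc_vanilla_b,
                     desc_rot_a, desc_rot_b, ratio=0.8):
    """Pool ratio-test matches from vanilla and rotation-robust descriptors.

    All inputs are float32 descriptor arrays (one row per keypoint).
    Each returned match is tagged with the descriptor type it came from,
    so the caller can map indices back to the corresponding keypoint list.
    """
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pooled = []
    for name, da, db in (("vanilla", desc_vanilla_a, desc_vanilla_b),
                         ("rotation-robust", desc_rot_a, desc_rot_b)):
        for pair in matcher.knnMatch(da, db, k=2):
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                pooled.append((name, pair[0]))
    return pooled
```

In a full pipeline the pooled correspondences would then feed a robust estimator (e.g. RANSAC for relative pose); the 0.8 ratio threshold here is an arbitrary choice for the sketch rather than a value taken from the paper.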


Related research

03/10/2022
ReF – Rotation Equivariant Features for Local Feature Matching
Sparse local feature matching is pivotal for many computer vision and ro...

10/03/2020
Early Bird: Loop Closures from Opposing Viewpoints for Perceptually-Aliased Indoor Environments
Significant advances have been made recently in Visual Place Recognition...

08/01/2019
Visual Place Recognition for Aerial Robotics: Exploring Accuracy-Computation Trade-off for Local Image Descriptors
Visual Place Recognition (VPR) is a fundamental yet challenging task for...

04/21/2022
A case for using rotation invariant features in state of the art feature matchers
The aim of this paper is to demonstrate that a state of the art feature ...

02/20/2019
Look No Deeper: Recognizing Places from Opposing Viewpoints under Varying Scene Appearance using Single-View Depth Estimation
Visual place recognition (VPR) - the act of recognizing a familiar visua...

07/17/2020
Online Invariance Selection for Local Feature Descriptors
To be invariant, or not to be invariant: that is the question formulated...

09/22/2022
DRKF: Distilled Rotated Kernel Fusion for Efficiently Boosting Rotation Invariance in Image Matching
Most existing learning-based image matching pipelines are designed for b...
