A case for using rotation invariant features in state of the art feature matchers

04/21/2022
by Georg Bökman et al.

The aim of this paper is to demonstrate that a state-of-the-art feature matcher (LoFTR) can be made more robust to rotations simply by replacing the backbone CNN with a steerable CNN that is equivariant to translations and image rotations. Experiments show that this robustness gain is obtained without reducing performance on ordinary illumination and viewpoint matching sequences.
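The paper's steerable CNN is built with dedicated equivariant layers; as a toy illustration of the underlying idea (not the paper's implementation), the sketch below constructs a feature map that is equivariant to 90° rotations by correlating an image with four rotated copies of a filter and max-pooling over orientations. All function names here are illustrative assumptions.

```python
import numpy as np

def conv2d_valid(img, k):
    """Plain 'valid' cross-correlation of a 2-D image with a 2-D filter."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def c4_equivariant_features(img, k):
    """Correlate with the filter at 0/90/180/270 degrees and max-pool
    over the four orientations. Because the filter set is closed under
    90-degree rotation, rotating the input rotates the output."""
    responses = [conv2d_valid(img, np.rot90(k, r)) for r in range(4)]
    return np.max(responses, axis=0)

# Equivariance check: feature map of the rotated image equals the
# rotated feature map of the original image.
rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
k = rng.standard_normal((3, 3))
assert np.allclose(c4_equivariant_features(np.rot90(img), k),
                   np.rot90(c4_equivariant_features(img, k)))
```

Max-pooling over the orientation channel also makes the per-location response values rotation invariant, which is the property a downstream matcher benefits from; a full steerable CNN achieves the same effect for continuous rotations with constrained filter bases rather than explicit filter copies.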


Related research

- Performance Evaluation of SIFT Descriptor against Common Image Deformations on Iban Plaited Mat Motifs (10/03/2018)
- RoRD: Rotation-Robust Descriptors and Orthographic Views for Local Feature Matching (03/15/2021)
- Harmonic Networks: Deep Translation and Rotation Equivariance (12/14/2016)
- R2FD2: Fast and Robust Matching of Multimodal Remote Sensing Image via Repeatable Feature Detector and Rotation-invariant Feature Descriptor (12/05/2022)
- ReF – Rotation Equivariant Features for Local Feature Matching (03/10/2022)
- Perceptual Loss for Robust Unsupervised Homography Estimation (04/20/2021)
- In Search of Projectively Equivariant Neural Networks (09/29/2022)
