Descriptor learning for omnidirectional image matching

12/29/2011
by Jonathan Masci, et al.

Feature matching in omnidirectional vision systems is a challenging problem, mainly because their complicated optics make it hard, or even impossible, to model invariance theoretically and to construct invariant feature descriptors. In this paper, we propose learning invariant descriptors from a training set of similar and dissimilar descriptor pairs. We use the similarity-preserving hashing framework, which maps descriptors into the Hamming space while preserving their similarity on the training set. A neural network is used to solve the underlying optimization problem. Our approach outperforms not only straightforward descriptor matching but also state-of-the-art similarity-preserving hashing methods.
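The similarity-preserving hashing idea in the abstract can be sketched in a few lines. The network, loss, and dimensions below are illustrative assumptions, not the authors' actual architecture: a single tanh projection layer is trained on similar/dissimilar descriptor pairs with a contrastive loss, then binarized with a sign function at test time so that matching reduces to Hamming distance between codes.

```python
import math
import random

random.seed(0)

D, K = 4, 8          # descriptor and code dimensions (illustrative choices)
MARGIN, LR = 4.0, 0.1

# Hypothetical one-layer "network": W maps R^D descriptors to R^K codes
W = [[random.uniform(-0.5, 0.5) for _ in range(D)] for _ in range(K)]

def forward(W, x):
    """Relaxed (real-valued) code: tanh of a linear projection."""
    return [math.tanh(sum(W[i][j] * x[j] for j in range(D))) for i in range(K)]

def hash_code(W, x):
    """Binary code in the Hamming space: sign of the projection."""
    return [1 if sum(W[i][j] * x[j] for j in range(D)) > 0 else 0
            for i in range(K)]

def hamming(a, b):
    return sum(u != v for u, v in zip(a, b))

def train_pair(W, x1, x2, similar):
    """One gradient step of a contrastive loss on the relaxed codes:
    pull similar pairs together, push dissimilar pairs apart up to MARGIN."""
    y1, y2 = forward(W, x1), forward(W, x2)
    diff = [a - b for a, b in zip(y1, y2)]
    dist2 = sum(d * d for d in diff)
    if similar:
        sign = 1.0        # minimize squared distance
    elif dist2 < MARGIN:
        sign = -1.0       # maximize squared distance while below the margin
    else:
        return            # margin satisfied, no update needed
    for i in range(K):
        g = 2.0 * diff[i]
        for j in range(D):
            W[i][j] -= LR * sign * g * ((1 - y1[i] ** 2) * x1[j]
                                        - (1 - y2[i] ** 2) * x2[j])

# Toy training set: two descriptor clusters stand in for
# matching (same cluster) and non-matching (different cluster) features.
def sample(center):
    return [c + random.gauss(0, 0.1) for c in center]

c1, c2 = [1.0] * D, [-1.0] * D
for _ in range(200):
    train_pair(W, sample(c1), sample(c1), similar=True)
    train_pair(W, sample(c1), sample(c2), similar=False)

sim_d = hamming(hash_code(W, sample(c1)), hash_code(W, sample(c1)))
dis_d = hamming(hash_code(W, sample(c1)), hash_code(W, sample(c2)))
print(sim_d, dis_d)  # similar pairs should end up at a smaller Hamming distance
```

The tanh relaxation is a common trick in learning-to-hash methods: the non-differentiable sign function used at test time is replaced by a smooth surrogate during training so that ordinary gradient descent applies.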


