DASC: Robust Dense Descriptor for Multi-modal and Multi-spectral Correspondence Estimation

04/27/2016
by Seungryong Kim, et al.

Establishing dense correspondences between multiple images is a fundamental task in many applications. However, finding reliable correspondences between multi-modal or multi-spectral images remains unsolved due to their challenging photometric and geometric variations. In this paper, we propose a novel dense descriptor, called dense adaptive self-correlation (DASC), to estimate multi-modal and multi-spectral dense correspondences. Based on the observation that self-similarity within an image is robust to variations in imaging modality, we define the descriptor as a series of adaptive self-correlation similarity measures between patches sampled by randomized receptive-field pooling, in which the sampling pattern is obtained via discriminative learning. The computational redundancy of dense descriptors is dramatically reduced by applying fast edge-aware filtering. Furthermore, to address geometric variations including scale and rotation, we propose a geometry-invariant DASC (GI-DASC) descriptor that effectively leverages the DASC through a superpixel-based representation. For a quantitative evaluation of the GI-DASC, we build a novel multi-modal benchmark with varying photometric and geometric conditions. Experimental results demonstrate the outstanding performance of the DASC and GI-DASC in many cases of multi-modal and multi-spectral dense correspondence.
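
To make the self-similarity idea in the abstract concrete, the sketch below computes a simplified, DASC-style dense descriptor on a grayscale image: for every pixel, it records normalized cross-correlations between pairs of small patches sampled at fixed offsets inside a local support window. This is only an illustration of the general principle, not the authors' method: the function name, the random (rather than discriminatively learned) sampling pattern, and the plain normalized correlation (rather than the edge-aware adaptive self-correlation with fast filtering) are all assumptions made for brevity.

```python
import numpy as np

def dasc_like_descriptor(image, num_pairs=64, patch_radius=2,
                         support_radius=15, seed=0):
    """Illustrative self-similarity descriptor (NOT the exact DASC formulation).

    For each pixel, collect normalized cross-correlations between pairs of
    small patches sampled at fixed offsets inside the local support window.
    The real DASC uses a learned sampling pattern, an adaptive (edge-aware)
    self-correlation, and fast recursive filtering; those are omitted here.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # Hypothetical stand-in for the learned receptive-field pooling:
    # random pairs of patch-center offsets inside the support window.
    offsets = rng.integers(-support_radius, support_radius + 1,
                           size=(num_pairs, 2, 2))
    pad = support_radius + patch_radius
    padded = np.pad(image.astype(np.float64), pad, mode="reflect")

    def patch(cy, cx):
        # Square patch of side (2 * patch_radius + 1) centered at (cy, cx).
        return padded[cy - patch_radius: cy + patch_radius + 1,
                      cx - patch_radius: cx + patch_radius + 1]

    desc = np.zeros((h, w, num_pairs))
    for y in range(h):
        for x in range(w):
            cy, cx = y + pad, x + pad
            for k, ((dy1, dx1), (dy2, dx2)) in enumerate(offsets):
                p = patch(cy + dy1, cx + dx1).ravel()
                q = patch(cy + dy2, cx + dx2).ravel()
                p = p - p.mean()
                q = q - q.mean()
                denom = np.linalg.norm(p) * np.linalg.norm(q) + 1e-8
                desc[y, x, k] = np.dot(p, q) / denom
    return desc
```

Because the descriptor encodes how patches within the support window relate to one another rather than their raw intensities, two images of the same scene taken in different modalities (e.g., RGB vs. near-infrared) can be compared simply by matching descriptor vectors, for instance with an L2 distance over candidate disparities. The paper's actual pipeline additionally removes the per-pixel redundancy with fast edge-aware filtering and, in GI-DASC, handles scale and rotation through a superpixel-based representation.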
