ResMatch: Residual Attention Learning for Local Feature Matching

07/11/2023
by Yuxin Deng et al.

Attention-based graph neural networks have made great progress in learned feature matching. However, the literature lacks insight into how the attention mechanism works for feature matching. In this paper, we rethink cross- and self-attention from the viewpoint of traditional feature matching and filtering. To facilitate the learning of matching and filtering, we inject the similarity of descriptors into the cross-attention score and the similarity of relative positions into the self-attention score. In this way, the attention can focus on learning residual matching and filtering functions with reference to basic functions that measure visual and spatial correlation. Moreover, we mine intra- and inter-neighbors according to the similarity of descriptors and relative positions, so that sparse attention for each point can be performed only within its neighborhood, yielding higher computational efficiency. Feature matching networks equipped with our full and sparse residual attention learning strategies are termed ResMatch and sResMatch, respectively. Extensive experiments on feature matching, pose estimation and visual localization confirm the superiority of our networks.
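The core idea of injecting a hand-crafted similarity into the attention score can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's actual implementation: the function names, the single-head layout, and the use of cosine similarity of descriptors as the injected prior are all assumptions for the sake of the example. The learned query-key score only needs to model a residual on top of the descriptor-similarity prior.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def residual_cross_attention(q, k, v, desc_a, desc_b):
    """Cross-attention whose score is biased by descriptor similarity.

    q, k, v: projected features of shapes (Na, d), (Nb, d), (Nb, d).
    desc_a, desc_b: L2-normalized local descriptors of the two images.
    The similarity term plays the role of a basic matching function;
    the learned q/k term models only the residual on top of it.
    """
    d = q.shape[-1]
    learned = q @ k.T / np.sqrt(d)   # learned attention score
    prior = desc_a @ desc_b.T        # descriptor-similarity prior (injected)
    return softmax(learned + prior) @ v
```

A sparse variant in the spirit of sResMatch would keep, for each point, only the top-k entries of `prior` (its inter-neighbors) and restrict the softmax to those columns, trading a small amount of context for lower computational cost.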


Related research:

- 09/01/2021: Joint Graph Learning and Matching for Semantic Feature Correspondence
  In recent years, powered by the learned discriminative representation vi...

- 03/17/2022: MatchFormer: Interleaving Attention in Transformers for Feature Matching
  Local feature matching is a computationally intensive task at the subpix...

- 03/02/2023: ParaFormer: Parallel Attention Transformer for Efficient Feature Matching
  Heavy computation is a bottleneck limiting deep-learning-based feature ma...

- 02/09/2023: Drawing Attention to Detail: Pose Alignment through Self-Attention for Fine-Grained Object Classification
  Intra-class variations in the open world lead to various challenges in c...

- 07/29/2019: Interlaced Sparse Self-Attention for Semantic Segmentation
  In this paper, we present a so-called interlaced sparse self-attention a...

- 08/06/2023: Multi-scale Alternated Attention Transformer for Generalized Stereo Matching
  Recent stereo matching networks achieve dramatic performance by introdu...

- 11/26/2018: Matching Features without Descriptors: Implicitly Matched Interest Points (IMIPs)
  The extraction and matching of interest points is a prerequisite for vis...
