
Semantic Correspondence with Transformers

by Seokju Cho, et al.

We propose a novel cost aggregation network, called Cost Aggregation with Transformers (CATs), to find dense correspondences between semantically similar images under the additional challenges posed by large intra-class appearance and geometric variations. Previous hand-crafted or CNN-based methods for the cost aggregation stage either lack robustness to severe deformations or inherit the limitation of CNNs, whose limited receptive fields prevent them from discriminating incorrect matches. In contrast, CATs explores global consensus among the initial correlation maps with the help of several architectural designs that allow us to exploit the full potential of the self-attention mechanism. Specifically, we include appearance affinity modelling to disambiguate the initial correlation maps and multi-level aggregation to benefit from hierarchical feature representations within a Transformer-based aggregator, and combine these with swapping self-attention and residual connections not only to enforce consistent matching but also to ease the learning process. We conduct experiments demonstrating the effectiveness of the proposed model over the latest methods and provide extensive ablation studies. Code and trained models will be made available at
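To make the abstract's main ingredients concrete, here is a minimal NumPy sketch of the pipeline it describes: build a correlation map between source and target features, then refine it with self-attention applied to both axes ("swapping self-attention") plus residual connections. This is an illustrative simplification under stated assumptions, not the authors' implementation; function names such as `aggregate` are hypothetical, and the learned projections, appearance affinity modelling, and multi-level aggregation of CATs are omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def correlation(fs, ft):
    # cosine-similarity correlation map between source and target features
    fs = fs / np.linalg.norm(fs, axis=-1, keepdims=True)
    ft = ft / np.linalg.norm(ft, axis=-1, keepdims=True)
    return fs @ ft.T  # shape (Ns, Nt)

def self_attention(x):
    # single-head self-attention with identity projections (illustrative only)
    attn = softmax(x @ x.T / np.sqrt(x.shape[-1]), axis=-1)
    return attn @ x

def aggregate(corr):
    # swapping self-attention with residual connections:
    # attend over target positions, then (transposed) over source positions
    corr = corr + self_attention(corr)
    corr = (corr.T + self_attention(corr.T)).T
    return corr

rng = np.random.default_rng(0)
fs = rng.standard_normal((16, 8))  # 16 source positions, 8-dim features
ft = rng.standard_normal((16, 8))  # 16 target positions, 8-dim features
refined = aggregate(correlation(fs, ft))
print(refined.shape)  # → (16, 16)
```

Attending along both axes of the correlation map is what lets every candidate match be reweighted by global consensus, rather than by a local CNN neighborhood.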




CATs++: Boosting Cost Aggregation with Convolutions and Transformers

Cost aggregation is a highly important process in image matching tasks, ...

Cost Aggregation Is All You Need for Few-Shot Segmentation

We introduce a novel cost aggregation network, dubbed Volumetric Aggrega...

Cost Aggregation with 4D Convolutional Swin Transformer for Few-Shot Segmentation

This paper presents a novel cost aggregation network, called Volumetric ...

Integrative Feature and Cost Aggregation with Transformers for Dense Correspondence

We present a novel architecture for dense correspondence. The current st...

Multi Resolution Analysis (MRA) for Approximate Self-Attention

Transformers have emerged as a preferred model for many tasks in natural...

CATrans: Context and Affinity Transformer for Few-Shot Segmentation

Few-shot segmentation (FSS) aims to segment novel categories given scarc...

Monocular Scene Reconstruction with 3D SDF Transformers

Monocular scene reconstruction from posed images is challenging due to t...

Code Repositories


Official implementation of CATs
