Discovering Spatial Relationships by Transformers for Domain Generalization

08/23/2021
by Cuicui Kang, et al.

Due to the rapid increase in the diversity of image data, the problem of domain generalization has received growing attention. Although domain generalization remains challenging, it has advanced considerably thanks to progress in computer vision, and most of these advances rely on deep architectures based on convolutional neural networks (CNNs). However, while CNNs are strong at extracting discriminative features, they model the relations between different locations in an image poorly, because the responses of CNN filters are mostly local. Since such local and global spatial relationships characterize the object under consideration, they play a critical role in improving generalization across the domain gap. To capture the relationships between object parts and thereby improve domain generalization, this work proposes to use the self-attention model. However, attention models were designed for sequences and are not well suited to extracting discriminative features from 2D images. We therefore propose a hybrid architecture that discovers the spatial relationships between local CNN features and derives a composite representation encoding both the discriminative features and their relationships, improving domain generalization. Evaluation on three well-known benchmarks demonstrates the benefit of modeling the relationships between the features of an image with the proposed method, which achieves state-of-the-art domain generalization performance. In particular, the proposed algorithm outperforms the state of the art by 2.2% and 3.4% on the PACS and Office-Home databases, respectively.
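To illustrate the kind of hybrid design the abstract describes, the sketch below feeds the spatial feature map of a CNN backbone into a transformer encoder so that self-attention can relate distant image locations, and pools the result into a composite representation for classification. This is a minimal sketch, assuming a ResNet-18 backbone, a 7x7 token grid, learned positional embeddings, and mean pooling; these choices are illustrative and not the authors' exact configuration.

```python
# Minimal sketch of a hybrid CNN + self-attention architecture for
# domain generalization. Backbone, token grid, dimensions, and pooling
# are illustrative assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class HybridCNNTransformer(nn.Module):
    def __init__(self, num_classes, d_model=512, nhead=8, num_layers=2):
        super().__init__()
        backbone = resnet18(weights=None)
        # Keep the convolutional stages only; drop global pooling and the fc
        # head so the output is a spatial map of local feature descriptors.
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])
        # Learned positional embeddings for the 7x7 grid of local tokens
        # that ResNet-18 produces on 224x224 inputs (49 tokens, 512 dims).
        self.pos_embed = nn.Parameter(torch.zeros(1, 49, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.classifier = nn.Linear(d_model, num_classes)

    def forward(self, x):
        feat = self.cnn(x)                        # (B, 512, 7, 7) local CNN features
        tokens = feat.flatten(2).transpose(1, 2)  # (B, 49, 512) sequence of local tokens
        tokens = tokens + self.pos_embed
        tokens = self.encoder(tokens)             # self-attention relates distant locations
        pooled = tokens.mean(dim=1)               # composite representation
        return self.classifier(pooled)

# Usage (7 classes, as in PACS):
# logits = HybridCNNTransformer(num_classes=7)(torch.randn(2, 3, 224, 224))
```

In this arrangement the CNN supplies discriminative local features, while the transformer encoder models how those features relate across the image; the pooled token sequence serves as the composite representation the abstract refers to.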

