Spatial Cross-Attention Improves Self-Supervised Visual Representation Learning

06/07/2022
by Mehdi Seyfi, et al.

Unsupervised representation learning methods such as SwAV have proven effective at learning the visual semantics of a target dataset. The core idea behind these methods is that different views of the same image represent the same semantics. In this paper, we introduce an add-on module that injects knowledge of the spatial cross-correlations among samples. This, in turn, distills intra-class information, including feature-level locations and cross-similarities between same-class instances. The proposed add-on can be attached to existing methods such as SwAV, and it can later be removed for inference without any modification of the learned weights. Through an extensive set of empirical evaluations, we verify that our method improves class activation map quality, top-1 classification accuracy, and performance on downstream tasks such as object detection, across different configuration settings.
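The abstract describes a removable module that lets spatial features of one sample attend to those of other samples. The paper's code is not shown here, so the following is only a minimal PyTorch sketch of that idea: the class name, the batch-roll pairing of samples, and all tensor shapes are hypothetical stand-ins, not the authors' implementation. Because the module is a residual add-on around the backbone features, it can simply be skipped at inference, leaving the backbone weights untouched.

```python
# Hypothetical sketch of a spatial cross-attention add-on (not the authors'
# released code). Each sample's spatial tokens attend to another sample's
# tokens, approximating "cross-correlations among samples" from the abstract.
import torch
import torch.nn as nn


class SpatialCrossAttention(nn.Module):
    """Cross-attention between spatial feature maps of different batch samples."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) backbone feature maps.
        b, c, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)   # (B, H*W, C)
        # Roll the batch so each sample attends to another sample's spatial
        # tokens; a stand-in for pairing same-class instances.
        context = torch.roll(tokens, shifts=1, dims=0)
        out, _ = self.attn(self.norm(tokens), context, context)
        out = tokens + out                          # residual: removable add-on
        return out.transpose(1, 2).reshape(b, c, h, w)


# Usage during pre-training only (shapes are illustrative):
feats = torch.randn(8, 256, 7, 7)                  # fake backbone output
addon = SpatialCrossAttention(channels=256)
enhanced = addon(feats)                             # (8, 256, 7, 7)
# At inference, skip `addon` and feed `feats` downstream unchanged.
```

Because the attention output is added residually, dropping the module at inference reduces the network to the plain backbone, which matches the abstract's claim that removal requires no weight modification.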


Related research

12/13/2022 · Semantics-Consistent Feature Search for Self-Supervised Visual Representation Learning
In contrastive self-supervised learning, the common way to learn discrim...

09/13/2022 · HistoPerm: A Permutation-Based View Generation Approach for Learning Histopathologic Feature Representations
Recently, deep learning methods have been successfully applied to solve ...

03/31/2023 · INoD: Injected Noise Discriminator for Self-Supervised Representation Learning in Agricultural Fields
Perception datasets for agriculture are limited both in quantity and div...

07/19/2021 · Exploring Set Similarity for Dense Self-supervised Representation Learning
By considering the spatial correspondence, dense self-supervised represe...

08/25/2023 · Self-Supervised Representation Learning with Cross-Context Learning between Global and Hypercolumn Features
Whilst contrastive learning yields powerful representations by matching ...

04/04/2022 · BatchFormerV2: Exploring Sample Relationships for Dense Representation Learning
Attention mechanisms have been very popular in deep neural networks, whe...

12/02/2018 · Iterative Reorganization with Weak Spatial Constraints: Solving Arbitrary Jigsaw Puzzles for Unsupervised Representation Learning
Learning visual features from unlabeled image data is an important yet c...
