Exploring Set Similarity for Dense Self-supervised Representation Learning

07/19/2021
by Zhaoqing Wang, et al.

By considering spatial correspondence, dense self-supervised representation learning has achieved superior performance on various dense prediction tasks. However, pixel-level correspondence tends to be noisy because of the many similar, misleading pixels, e.g., in backgrounds. To address this issue, we propose to explore set similarity (SetSim) for dense self-supervised representation learning. We generalize pixel-wise similarity learning to a set-wise formulation to improve robustness, since sets contain more semantic and structural information. Specifically, by resorting to the attentional features of the views, we establish corresponding sets, thereby filtering out noisy background pixels that may cause incorrect correspondences. Meanwhile, these attentional features keep the same image coherent across different views, which alleviates semantic inconsistency. We further search for the cross-view nearest neighbours of the sets and employ this structured neighbourhood information to enhance robustness. Empirical evaluations demonstrate that SetSim is superior to state-of-the-art methods on object detection, keypoint detection, instance segmentation, and semantic segmentation.
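To make the set-wise idea concrete, the sketch below is a minimal, hypothetical PyTorch illustration rather than the authors' implementation: the attention used to select each view's pixel set is approximated here by similarity to the global average feature, and the loss simply pulls each selected pixel towards its cross-view nearest neighbour. Tensor shapes, the selection ratio, and all function names are assumptions introduced only for this example.

```python
import torch
import torch.nn.functional as F

def select_set(feat, keep_ratio=0.5):
    """Select an attentional set of pixel features from one view.

    feat: (C, H, W) dense features of one augmented view.
    Salience is approximated by each pixel's similarity to the global
    average feature (a stand-in for the attention used in the paper).
    Returns the selected pixel features, shape (k, C).
    """
    C, H, W = feat.shape
    pixels = feat.reshape(C, H * W).t()              # (HW, C)
    anchor = F.normalize(pixels.mean(dim=0), dim=0)  # global descriptor
    salience = F.normalize(pixels, dim=1) @ anchor   # (HW,) per-pixel score
    k = max(1, int(keep_ratio * H * W))
    idx = salience.topk(k).indices                   # keep the most salient pixels
    return pixels[idx]

def set_similarity_loss(feat1, feat2, keep_ratio=0.5):
    """Cross-view set-wise similarity: each selected pixel in view 1 is
    pulled towards its nearest neighbour in view 2's set, and vice versa."""
    s1 = F.normalize(select_set(feat1, keep_ratio), dim=1)  # (N1, C)
    s2 = F.normalize(select_set(feat2, keep_ratio), dim=1)  # (N2, C)
    sim = s1 @ s2.t()                                        # (N1, N2) cosine similarities
    # nearest-neighbour matching in both directions, averaged
    return 1 - 0.5 * (sim.max(dim=1).values.mean() + sim.max(dim=0).values.mean())

# toy usage with random dense features from two views of the same image
f1, f2 = torch.randn(256, 7, 7), torch.randn(256, 7, 7)
print(set_similarity_loss(f1, f2).item())
```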


Related research

Dense Contrastive Learning for Self-Supervised Visual Pre-Training (11/18/2020)
To date, most existing self-supervised learning methods are designed and...

Siamese Image Modeling for Self-Supervised Vision Representation Learning (06/02/2022)
Self-supervised learning (SSL) has delivered superior performance on a v...

Dense Semantic Contrast for Self-Supervised Visual Representation Learning (09/16/2021)
Self-supervised representation learning for visual pre-training has achi...

Learning Where to Learn in Cross-View Self-Supervised Learning (03/28/2022)
Self-supervised learning (SSL) has made enormous progress and largely na...

A Study on Self-Supervised Object Detection Pretraining (07/09/2022)
In this work, we study different approaches to self-supervised pretraini...

Spatial Cross-Attention Improves Self-Supervised Visual Representation Learning (06/07/2022)
Unsupervised representation learning methods like SwAV are proved to be ...

Learning Fine-Grained Features for Pixel-wise Video Correspondences (08/06/2023)
Video analysis tasks rely heavily on identifying the pixels from differe...
