Patch-Level Contrasting without Patch Correspondence for Accurate and Dense Contrastive Representation Learning

06/23/2023
by Shaofeng Zhang, et al.

We propose ADCLR: Accurate and Dense Contrastive Representation Learning, a novel self-supervised learning framework for learning accurate and dense vision representations. To extract spatially sensitive information, ADCLR introduces query patches for contrasting, in addition to global contrasting. Compared with previous dense contrasting methods, ADCLR mainly enjoys three merits: i) achieving both globally discriminative and spatially sensitive representations, ii) model efficiency (no extra parameters beyond the global contrasting baseline), and iii) being correspondence-free and thus simpler to implement. Our approach achieves new state-of-the-art performance among contrastive methods. On classification tasks, for ViT-S, ADCLR achieves 77.5% top-1 accuracy on ImageNet with linear probing, outperforming our baseline (DINO, without our devised techniques as plug-in) by 0.5%. For ViT-B, ADCLR achieves 79.8% and 84.0% accuracy on ImageNet by linear probing and finetuning, outperforming iBOT by 0.3% and 0.2%. On dense tasks, ADCLR achieves significant improvements on MS-COCO of 44.3% AP on object detection and 40.7% AP on instance segmentation, outperforming the previous SOTA method SelfPatch by 2.2% and 1.2%, respectively. On ADE20K, ADCLR outperforms SelfPatch by 1.0% mIoU on the segmentation task.
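The abstract only sketches the mechanism, but the correspondence-free idea is concrete enough to illustrate: because the same query patches are appended as extra tokens to both augmented views, their two output embeddings form a positive pair by construction, with no patch matching between crops. Below is a minimal, hypothetical PyTorch sketch of that idea. `ToyEncoder`, `dino_loss`, the tiny transformer, and the random stand-in data are all assumptions made for illustration, not the authors' released implementation; the DINO-style temperatures (0.1/0.04) are borrowed conventions.

```python
# Hypothetical sketch of ADCLR-style query-patch contrasting, inferred
# from the abstract; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyEncoder(nn.Module):
    """Stand-in for a ViT: embeds patch tokens plus a [CLS] token and
    runs them through a small transformer encoder."""
    def __init__(self, patch_dim=3 * 16 * 16, dim=192):
        super().__init__()
        self.embed = nn.Linear(patch_dim, dim)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, patches, query_patches):
        # patches: (B, N, patch_dim) from one augmented view
        # query_patches: (B, Q, patch_dim), identical for both views
        B = patches.size(0)
        tokens = torch.cat([
            self.cls.expand(B, -1, -1),
            self.embed(patches),
            self.embed(query_patches),  # appended as extra query tokens
        ], dim=1)
        out = self.blocks(tokens)
        Q = query_patches.size(1)
        return out[:, 0], out[:, -Q:]  # global ([CLS]) and query outputs

def dino_loss(student_out, teacher_out, tau_s=0.1, tau_t=0.04):
    """DINO-style cross-entropy between a sharpened teacher distribution
    and the student distribution (simplified: no projection head)."""
    t = F.softmax(teacher_out / tau_t, dim=-1).detach()
    s = F.log_softmax(student_out / tau_s, dim=-1)
    return -(t * s).sum(-1).mean()

# One toy training step. The SAME query patches go into both views, so
# their outputs are positives without any cross-crop patch matching.
B, N, Q, D = 4, 196, 8, 3 * 16 * 16
student, teacher = ToyEncoder(), ToyEncoder()
view1, view2 = torch.randn(B, N, D), torch.randn(B, N, D)
queries = torch.randn(B, Q, D)  # stand-in for the sampled query patches

cls_s, qry_s = student(view1, queries)
with torch.no_grad():
    cls_t, qry_t = teacher(view2, queries)

# Global contrasting plus patch-level contrasting on the query tokens.
loss = dino_loss(cls_s, cls_t) + dino_loss(qry_s, qry_t)
loss.backward()
```

Note that this sketch omits pieces a real DINO-style pipeline would need, such as projection heads, the EMA teacher update, and output centering; it is meant only to show where the query-token contrast slots in next to the global one.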

Related research

07/08/2022 · Pixel-level Correspondence for Self-Supervised Learning from Video
While self-supervised learning has enabled effective representation lear...

01/13/2022 · Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet?
Despite recent progress made by self-supervised methods in representatio...

11/21/2021 · HoughCL: Finding Better Positive Pairs in Dense Self-supervised Learning
Recently, self-supervised methods show remarkable achievements in image-...

10/11/2022 · Improving Dense Contrastive Learning with Dense Negative Pairs
Many contrastive representation learning methods learn a single global r...

08/12/2022 · CCRL: Contrastive Cell Representation Learning
Cell identification within the H&E slides is an essential prerequisite...

09/29/2022 · Understanding Collapse in Non-Contrastive Siamese Representation Learning
Contrastive methods have led a recent surge in the performance of self-s...

04/19/2021 · DisCo: Remedy Self-supervised Learning on Lightweight Models with Distilled Contrastive Learning
While self-supervised representation learning (SSL) has received widespr...
