Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet?

01/13/2022
by   Nenad Tomašev, et al.

Despite recent progress made by self-supervised methods in representation learning with residual networks, they still underperform supervised learning on the ImageNet classification benchmark, limiting their applicability in performance-critical settings. Building on prior theoretical insights from Mitrovic et al., 2021, we propose ReLICv2, which combines an explicit invariance loss with a contrastive objective over a varied set of appropriately constructed data views. ReLICv2 achieves 77.1% top-1 accuracy on ImageNet under linear evaluation with a ResNet50 architecture and 80.6% with larger ResNet models, outperforming previous state-of-the-art self-supervised approaches by a wide margin. Most notably, ReLICv2 is the first representation learning method to consistently outperform the supervised baseline in a like-for-like comparison across a range of standard ResNet architectures. Finally, we show that despite using ResNet encoders, ReLICv2 is comparable to state-of-the-art self-supervised vision transformers.
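The abstract describes the objective only at a high level: a contrastive term over constructed data views plus an explicit invariance loss. A minimal sketch of such a combined objective, assuming an InfoNCE-style contrastive term and a KL-divergence invariance penalty between the two views' similarity distributions (the function name, weighting `alpha`, and exact form are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def relic_style_loss(z1, z2, temperature=0.1, alpha=1.0):
    """Hypothetical sketch of a contrastive + invariance objective.

    z1, z2: (n, d) embeddings of two augmented views of the same n images.
    Returns an InfoNCE contrastive loss plus alpha times a KL invariance
    penalty between the two views' similarity distributions.
    """
    # L2-normalise embeddings so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    n = z1.shape[0]

    # cross-view similarity logits (both directions)
    p12 = softmax(z1 @ z2.T / temperature)
    p21 = softmax(z2 @ z1.T / temperature)

    # contrastive term: each image's other view is its positive
    idx = np.arange(n)
    contrastive = -np.mean(np.log(p12[idx, idx])) - np.mean(np.log(p21[idx, idx]))

    # invariance term: KL between the two views' candidate distributions,
    # encouraging predictions to be invariant to the augmentation used
    invariance = np.mean(np.sum(p12 * (np.log(p12) - np.log(p21)), axis=1))

    return contrastive + alpha * invariance
```

With identical views the KL term vanishes and only the contrastive term remains; the `alpha` weight trades off how strongly augmentation invariance is enforced against instance discrimination.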

