Do Self-Supervised and Supervised Methods Learn Similar Visual Representations?

10/01/2021
by   Tom George Grigg, et al.

Despite the success of a number of recent techniques for visual self-supervised deep learning, there remains limited investigation into the representations that are ultimately learned. By leveraging recent advances in comparing neural representations, we explore this direction by comparing a contrastive self-supervised algorithm (SimCLR) to supervised learning for simple image data in a common architecture. We find that the two methods learn similar intermediate representations through dissimilar means, and that the representations diverge rapidly in the final few layers. We investigate this divergence, finding that it is caused by these layers strongly fitting their distinct learning objectives. We also find that SimCLR's objective implicitly fits the supervised objective in intermediate layers, but that the reverse is not true. Our work particularly highlights the importance of the learned intermediate representations and raises important questions for auxiliary task design.
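The "recent advances in comparing neural representations" mentioned above typically refer to similarity indices such as centered kernel alignment (CKA). The abstract does not name the exact metric used, so as an illustrative assumption, here is a minimal sketch of linear CKA between the activation matrices of two layers evaluated on the same batch of examples:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between representation matrices
    X (n x d1) and Y (n x d2), where rows correspond to the same n examples.
    Returns a similarity in [0, 1]."""
    # Center each feature dimension across examples.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # CKA = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return cross / (norm_x * norm_y)

rng = np.random.default_rng(0)
A = rng.normal(size=(500, 8))           # hypothetical layer activations
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))
same = linear_cka(A, A @ Q)             # rotated copy: CKA is ~1.0,
                                        # since linear CKA is invariant
                                        # to orthogonal transforms
B = rng.normal(size=(500, 8))
diff = linear_cka(A, B)                 # independent features: CKA near 0
print(same, diff)
```

Computing this index for each pair of layers across the SimCLR-trained and supervised networks is what makes claims like "similar intermediate representations, divergence in the final few layers" quantitative.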


