High Fidelity Visualization of What Your Self-Supervised Representation Knows About

12/16/2021
by Florian Bordes, et al.

Discovering what is learned by neural networks remains a challenge. In self-supervised learning, classification is the most common task used to evaluate how good a representation is. However, relying only on such a downstream task can limit our understanding of how much information is retained in the representation of a given input. In this work, we showcase the use of a conditional diffusion-based generative model (RCDM) to visualize representations learned with self-supervised models. We further demonstrate that this model's generation quality is on par with state-of-the-art generative models while remaining faithful to the representation used as conditioning. Using this new tool to analyze self-supervised models, we show visually that: i) SSL (backbone) representations are not really invariant to many of the data augmentations they were trained with; ii) SSL projector embeddings appear too invariant for tasks like classification; iii) SSL representations are more robust to small adversarial perturbations of their inputs; and iv) SSL models learn an inherent structure that can be used for image manipulation.
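The core idea of the abstract, conditioning a diffusion denoiser on a frozen SSL representation, can be sketched in a few lines. The snippet below is a minimal toy illustration, not the paper's implementation: the "encoder" and "denoiser" are hypothetical linear maps standing in for a frozen SSL backbone and a conditional U-Net, and it shows only the interface `denoiser(x_noisy, h)` where `h = f(x)` stays fixed while the generative model is trained and sampled.

```python
import numpy as np

rng = np.random.default_rng(0)

D, H = 8, 4                          # toy pixel dim and representation dim
W_enc = rng.normal(size=(H, D))      # frozen "SSL backbone" (assumed, linear)
W_x = 0.1 * rng.normal(size=(D, D))  # denoiser weights on the noisy input
W_h = 0.1 * rng.normal(size=(D, H))  # denoiser weights on the conditioning

def encoder(x):
    # h = f(x): the representation we want to visualize; kept fixed.
    return W_enc @ x

def denoiser(x_noisy, h):
    # Predicts the noise from the noisy input AND the representation h;
    # in RCDM this role is played by a conditional U-Net.
    return W_x @ x_noisy + W_h @ h

x = rng.normal(size=D)                # a "clean image"
h = encoder(x)                        # conditioning vector
x_t = x + 0.5 * rng.normal(size=D)    # noised input at some diffusion step
eps_hat = denoiser(x_t, h)            # conditional noise estimate
print(eps_hat.shape)
```

Sampling repeatedly with the same `h` but different noise then yields images that share whatever information `h` retains, which is what makes the visualization diagnostic for invariances.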

Related research

Understanding Failure Modes of Self-Supervised Learning (03/03/2022)
Self-supervised learning methods have shown impressive results in downst...

ECGAN: Self-supervised generative adversarial network for electrocardiography (01/23/2023)
High-quality synthetic data can support the development of effective pre...

ViewCLR: Learning Self-supervised Video Representation for Unseen Viewpoints (12/07/2021)
Learning self-supervised video representation predominantly focuses on d...

Masked Diffusion as Self-supervised Representation Learner (08/10/2023)
Denoising diffusion probabilistic models have recently demonstrated stat...

Toward a Geometrical Understanding of Self-supervised Contrastive Learning (05/13/2022)
Self-supervised learning (SSL) is currently one of the premier technique...

Planckian jitter: enhancing the color quality of self-supervised visual representations (02/16/2022)
Several recent works on self-supervised learning are trained by mapping ...

Information Competing Process for Learning Diversified Representations (06/04/2019)
Learning representations with diversified information remains an open pr...
