What shapes the loss landscape of self-supervised learning?

10/02/2022
by Liu Ziyin, et al.

Prevention of complete and dimensional collapse of representations has recently become a design principle for self-supervised learning (SSL). However, questions remain in our theoretical understanding: when do these collapses occur, and what are their mechanisms and causes? We answer these questions by thoroughly analyzing SSL loss landscapes for a linear model. We derive an analytically tractable theory of the SSL landscape and show that it accurately captures an array of collapse phenomena and identifies their causes. Finally, we leverage the interpretability afforded by the analytical theory to understand how dimensional collapse can be beneficial and what affects the robustness of SSL to data imbalance.
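The abstract distinguishes complete collapse (all embeddings identical) from dimensional collapse (embeddings spanning only a low-dimensional subspace). A common way to diagnose the latter, independent of this paper's specific theory, is to inspect the eigenvalue spectrum of the embedding covariance matrix. The sketch below illustrates this diagnostic for a linear encoder; the matrix W, the toy data, and all dimensions are illustrative assumptions, not the authors' setup.

# A minimal sketch (not the paper's code) of diagnosing dimensional
# collapse: count the effective rank of the covariance of embeddings
# produced by a linear encoder W on augmented data.
import numpy as np

rng = np.random.default_rng(0)
n_samples, in_dim, embed_dim = 1000, 32, 16

# Toy data and one augmented "view" (small additive noise); names
# like aug_noise are hypothetical.
x = rng.normal(size=(n_samples, in_dim))
aug_noise = 0.1
x1 = x + aug_noise * rng.normal(size=x.shape)

# A full-rank linear encoder, and a rank-deficient one that mimics
# a dimensionally collapsed solution.
W_full = rng.normal(size=(in_dim, embed_dim))
U, _, Vt = np.linalg.svd(W_full, full_matrices=False)
s = np.ones(embed_dim)
s[embed_dim // 2 :] = 0.0  # zero out half the singular values
W_collapsed = U @ np.diag(s) @ Vt

for name, W in [("full-rank W", W_full), ("collapsed W", W_collapsed)]:
    z = x1 @ W                                  # embeddings
    cov = np.cov(z, rowvar=False)               # embedding covariance
    eig = np.linalg.eigvalsh(cov)[::-1]         # descending eigenvalues
    eff_rank = int((eig > 1e-8 * eig[0]).sum())  # near-zero modes dropped
    print(f"{name}: effective rank of embedding covariance = {eff_rank}")

Run as written, the collapsed encoder reports an effective rank of 8 rather than 16: half the embedding dimensions carry no variance, which is exactly the signature of dimensional collapse the paper's landscape analysis seeks to explain.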


Related research

04/27/2022
Why does Self-Supervised Learning for Speech Recognition Benefit Speaker Recognition?
Recently, self-supervised learning (SSL) has demonstrated strong perform...

10/13/2022
Demystifying Self-supervised Trojan Attacks
As an emerging machine learning paradigm, self-supervised learning (SSL)...

07/15/2023
Does Double Descent Occur in Self-Supervised Learning?
Most investigations into double descent have focused on supervised model...

06/10/2021
Automated Self-Supervised Learning for Graphs
Graph self-supervised learning has gained increasing attention due to it...

07/20/2022
What Do We Maximize in Self-Supervised Learning?
In this paper, we examine self-supervised learning methods, particularly...

04/03/2023
Charting the Topography of the Neural Network Landscape with Thermal-Like Noise
The training of neural networks is a complex, high-dimensional, non-conv...
