Hyperspherically Regularized Networks for BYOL Improves Feature Uniformity and Separability

04/29/2021
by Aiden Durrant, et al.

Bootstrap Your Own Latent (BYOL) introduced a self-supervised learning approach that avoids the contrastive paradigm and thereby removes the computational burden of negative sampling. However, feature representations learned under this paradigm are poorly distributed over the surface of the unit hypersphere (the representation space) compared to those of contrastive methods. This work empirically demonstrates that the feature diversity enforced by contrastive losses is beneficial when employed in BYOL, providing greater inter-class feature separability. Therefore, to achieve a more uniform distribution of features, we advocate minimizing the hyperspherical energy (i.e., maximizing the entropy) of the BYOL network weights. We show that directly optimizing a measure of uniformity alongside the standard loss, or regularizing the networks of the BYOL architecture to minimize the hyperspherical energy of their neurons, produces more uniformly distributed and better-performing representations for downstream tasks.
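The two ingredients named above can be illustrated concretely. The following is a minimal NumPy sketch, not the authors' implementation: `uniformity_loss` follows the common pairwise-Gaussian-potential formulation of uniformity on the hypersphere, and `hyperspherical_energy` is a Riesz-style inverse-distance energy over normalized neuron weight vectors; the function names, the temperature `t`, and the averaging conventions are illustrative assumptions.

```python
import numpy as np

def uniformity_loss(z, t=2.0):
    """Uniformity of embeddings on the unit hypersphere, measured as the
    log of the mean pairwise Gaussian potential. Lower = more uniform.
    (Illustrative sketch; `t` is an assumed temperature.)"""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # project rows onto the sphere
    sq_dists = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)
    mask = ~np.eye(z.shape[0], dtype=bool)            # drop self-pairs
    return np.log(np.mean(np.exp(-t * sq_dists[mask])))

def hyperspherical_energy(w, eps=1e-8):
    """Hyperspherical energy of a weight matrix (one neuron per row):
    mean inverse Euclidean distance between normalized neurons.
    Minimizing it pushes neurons apart on the sphere. (Illustrative sketch.)"""
    w = w / (np.linalg.norm(w, axis=1, keepdims=True) + eps)
    dists = np.linalg.norm(w[:, None, :] - w[None, :, :], axis=-1)
    n = w.shape[0]
    mask = ~np.eye(n, dtype=bool)
    return np.sum(1.0 / (dists[mask] + eps)) / (n * (n - 1))
```

In training, either term would be added to the BYOL objective with a weighting coefficient: the uniformity term on the projected features of a batch, or the energy term on the weights of the networks being regularized. Note that both quantities move in the same direction, falling as points spread out over the sphere.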


