Understanding Self-supervised Learning with Dual Deep Networks
We propose a novel theoretical framework to understand self-supervised learning methods that employ dual pairs of deep ReLU networks (e.g., SimCLR, BYOL). First, we prove that in each SGD update of SimCLR, the weights at each layer are updated by a covariance operator that specifically amplifies initial random selectivities that vary across data samples but survive averaging over data augmentations; we show that this leads to the emergence of hierarchical features if the input data are generated from a hierarchical latent tree model. With the same framework, we also show analytically that BYOL works due to an implicit contrastive term, acting as an approximate covariance operator. This term arises from the interplay between the zero-mean operation of BatchNorm and the extra predictor in the online network. Extensive ablation studies justify our theoretical findings.
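As a rough illustration of the covariance-operator argument, the sketch below (not the paper's code; `encoder`, `augment`, and the feature shapes are hypothetical placeholders) estimates the across-sample covariance of augmentation-averaged features. Feature directions that vary strongly across samples yet survive averaging over augmentations are exactly the selectivities that, per the analysis, SimCLR's updates amplify.

```python
import torch

def augmentation_averaged_covariance(encoder, samples, augment, n_aug=8):
    """Illustrative only: compute the covariance, across data samples, of
    features averaged over several augmentations of each sample.

    Assumptions (not from the paper's released code):
      - `encoder(x)` returns a 1-D feature vector for a single input `x`
      - `augment(x)` returns a randomly augmented view of `x`
      - `samples` is an iterable of individual inputs
    """
    with torch.no_grad():
        per_sample_means = []
        for x in samples:
            # Average the encoder output over several random augmentations;
            # augmentation-sensitive directions are washed out here.
            views = torch.stack([encoder(augment(x)) for _ in range(n_aug)])
            per_sample_means.append(views.mean(dim=0))

        f = torch.stack(per_sample_means)        # [n_samples, feat_dim]
        f = f - f.mean(dim=0, keepdim=True)      # center across samples
        # Directions with large eigenvalues vary across samples but are
        # stable under augmentation -- the selectivities being amplified.
        return f.T @ f / max(f.shape[0] - 1, 1)
```

In practice one could inspect the top eigenvectors of this matrix during training to see which feature directions the dynamics favor; this is a diagnostic sketch, not the update rule itself.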