ISD: Self-Supervised Learning by Iterative Similarity Distillation
Recently, contrastive learning has achieved great results in self-supervised learning, where the main idea is to pull two augmentations of an image (a positive pair) closer together than other random images (negative pairs). We argue that not all random images are equal. Hence, we introduce a self-supervised learning algorithm that uses a soft similarity for the negative images rather than a binary distinction between positive and negative pairs. We iteratively distill a slowly evolving teacher model into the student model by capturing the similarity of a query image to a set of random images and transferring that knowledge to the student. We argue that our method is less constrained than recent contrastive learning methods, so it can learn better features. In particular, it should handle unbalanced and unlabeled data better than existing contrastive methods, because the randomly chosen negative set may include many samples that are semantically similar to the query image; our method labels them as highly similar, whereas standard contrastive methods label them as negative pairs. Our method achieves better results than state-of-the-art models such as BYOL and MoCo in transfer learning settings. We also show that it performs better when the unlabeled data is unbalanced. Our code is available here: https://github.com/UMBCvision/ISD.
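
The abstract describes the core training loop: a slowly evolving (EMA) teacher embeds one view of an image, its softmax-normalized similarities to a bank of random images serve as a soft target, and the student is trained to match that distribution for the other view. The sketch below illustrates that idea in PyTorch; the function and parameter names (isd_step, the temperatures, the toy encoder, the queue update) are assumptions made for exposition, not the authors' exact implementation — see the linked repository for the real code.

```python
# Minimal sketch of one ISD-style training step (illustrative, not the official code).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


def isd_step(student, teacher, queue, x_query, x_key,
             tau_student=0.1, tau_teacher=0.04, momentum=0.99):
    """Distill the teacher's similarity distribution over a bank of random
    images into the student for one batch of two augmented views."""
    with torch.no_grad():
        # Teacher is a slowly evolving EMA copy of the student (no gradients).
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.data.mul_(momentum).add_(p_s.data, alpha=1.0 - momentum)
        k = F.normalize(teacher(x_key), dim=1)                  # teacher embedding of one view
        # Soft similarity of the teacher's view to the random images in the queue.
        target = F.softmax(k @ queue.t() / tau_teacher, dim=1)

    q = F.normalize(student(x_query), dim=1)                    # student embedding of the other view
    logits = q @ queue.t() / tau_student

    # KL divergence between teacher and student similarity distributions.
    loss = F.kl_div(F.log_softmax(logits, dim=1), target, reduction="batchmean")

    # Refresh the bank of random embeddings with the newest teacher keys.
    new_queue = torch.cat([k, queue], dim=0)[: queue.shape[0]]
    return loss, new_queue


if __name__ == "__main__":
    # Toy encoder and data, purely to show the call pattern.
    student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)

    queue = F.normalize(torch.randn(4096, 128), dim=1)          # bank of random embeddings
    x_query = torch.randn(8, 3, 32, 32)                         # one augmented view
    x_key = torch.randn(8, 3, 32, 32)                           # the other augmented view

    loss, queue = isd_step(student, teacher, queue, x_query, x_key)
    loss.backward()
    print(float(loss))
```

In this sketch the teacher temperature is set lower than the student's so the target distribution is sharper, a common choice in distillation-style objectives; the actual temperatures, queue size, and encoder in the paper may differ.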